US8065370B2 - Proofs to filter spam - Google Patents

Proofs to filter spam

Info

Publication number
US8065370B2
US8065370B2 (application US11/265,842, US26584205A)
Authority
US
United States
Prior art keywords
outgoing message
proof
spam
message
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US11/265,842
Other versions
US20070100949A1 (en)
Inventor
Geoffrey J Hulten
Gopalakrishnan Seshadrinathan
Joshua T. Goodman
Manav Mishra
Robert C J Pengelly
Robert L. Rounthwaite
Ryan C Colvin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US11/265,842
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ROUNTHWAITE, ROBERT L., COLVIN, RYAN C, HULTEN, GEOFFREY J, MISHRA, MANAV, SESHADRINATHAN, GOPALAKRISHNAN, PENGELLY, ROBERT CJ, GOODMAN, JOSHUA T.
Publication of US20070100949A1
Application granted
Publication of US8065370B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21 Monitoring or handling of messages
    • H04L51/212 Monitoring or handling of messages using filtering or selective blocking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management

Definitions

  • the results of the proofs 116 ( 1 )- 116 (Y) may be combined with a variety of identifying mechanisms 216 ( x ) that may also indicate a relative likelihood that a message is spam and/or sent by a spammer.
  • the communication modules 108 ( n ) and/or manager module 114 ( m ) gather and validate messages utilizing one or more applicable identifying mechanisms 216 ( x ).
  • the identifying mechanisms 216 ( x ) may involve checking that part of a message is signed with a specific private key, that a message was sent from a machine that is approved via a sender's identification for a specified domain, and so on.
  • a variety of identifying mechanisms 216 ( x ) and combinations thereof may be employed by the communication modules 108 ( n ), 114 ( m ), and/or the spam filters 124 ( n ), 124 ( m ), examples of which are described as follows.
  • the email address is a standard form of identity.
  • the email address may be checked by looking at a ‘FROM’ line in the header of a message.
  • the email address may be particularly vulnerable to attack, a combination of the email address and another one of the identifying mechanisms 216 ( x ) and/or the proofs 116 ( 1 )- 116 (Y) may result in substantial protection.
  • Third party certificates may involve the signing of a portion of a message with a certificate that can be traced to a third-party certifier.
  • This signature can be attached utilizing a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature.
  • the level of security provided by this technique may also be based on the reputation of the third party certifier, a type of certificate (e.g. some certifiers offer several levels of increasingly secure certification), and on the amount of the message signed (signing more of the message is presumably more secure).
  • a self-signed certificate involves signing a portion of a message with a certificate that the sender created. Like a third-party certificate, this identifying mechanism may be attached using a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature.
  • use of a self-signed certificate involves the creation of a public/private key pair by a sender, signing part of the message with the private key, and distributing the public key in the message (or via other standard methods). The level of security provided by this method is based on the amount of the message signed.
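  • As a purely illustrative sketch (the patent does not mandate any particular toolkit), the self-signed variant might be implemented with a generated RSA key pair, for example using the Python cryptography package; the choice of key size, padding, and hash below is an assumption made for the example:

        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives import hashes, serialization

        body = b"Subject: hello\r\n\r\nMessage body"   # the portion of the message being signed
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        signature = private_key.sign(body, padding.PKCS1v15(), hashes.SHA256())
        public_pem = private_key.public_key().public_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        # signature and public_pem would travel with the message (e.g., in S/MIME-style
        # headers), letting a recipient check that the same key signed earlier messages.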
  • the passcode identifying mechanism involves the use of a passcode in a message, such as by including a public key in a message but not signing any portion of the message with the associated private key.
  • This identity mechanism may be useful for users who have mail transfer agents that modify messages in transfer and destroy the cryptographic properties of signatures, such that the signatures cannot be verified.
  • This identifying mechanism is useful as a lightweight way to establish a form of identity.
  • a passcode is still potentially spoofable, the passcode may be utilized with other identifying mechanisms to provide greater likelihood of verification (i.e., authenticity of the sender's identity).
  • the IP address identifying mechanism involves validating whether a message was sent from a particular IP address or IP address range (e.g. the IP/24 range 204.200.100.*). In an implementation, this identity mechanism may support a less secure mode in which the IP address/range may appear in any of a message's “received” header lines. As before, the use of a particular IP address, IP address range, and/or where the IP address or range may be located in a message can serve as a basis for a relative likelihood that the message was sent from a spammer.
  • the valid Sender ID identifying mechanism involves validating whether a message was sent from a computer that is authorized to send messages for a particular domain via the Sender's ID. For example, reference may be made to a trusted domain. For instance, “test@test.com” is an address and “test.com” is the domain. It should be noted that the domain does not need to match exactly, e.g. the domain could also be formatted as foo.test.com. When a message from this address is received, the communication module 108 ( n ) may perform a Sender ID test on the “test.com” domain, and if the message matches the entry, it is valid.
  • This identifying mechanism can also leverage algorithms for detecting IP addresses in clients and any forthcoming standards for communicating IP addresses from edge servers, standards for communicating the results of Sender ID checks from the edge servers, and so on. Additionally, it should be noted that the Sender ID test is not limited to any particular sender identification technique or framework (e.g., sender policy framework (SPF), sender ID framework from MICROSOFT (Microsoft is a trademark of the Microsoft Corporation, Redmond, Wash.), and so on), but may include any mechanism that provides for authentication of a user or domain.
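  • A much-simplified sketch of such a check, using the dnspython package, is shown below; the patent is not limited to SPF, and the record parsing here (only literal ip4 mechanisms in a v=spf1 TXT record) is an assumption made for illustration:

        import dns.resolver   # dnspython package

        def sender_id_check(from_address, sending_ip):
            # Very rough SPF-style test: is sending_ip listed for the sender's domain?
            domain = from_address.rsplit("@", 1)[-1]   # "test@test.com" -> "test.com"
            try:
                answers = dns.resolver.resolve(domain, "TXT")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return False
            for record in answers:
                text = b"".join(record.strings).decode()
                if text.startswith("v=spf1") and ("ip4:" + sending_ip) in text:
                    return True
            return False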
  • the monetary attachment identifying mechanism involves inclusion of a monetary amount to a message for sending, in what may be referred to as an “e-stamp”. For example, a sender of the message may attach a monetary amount to the message that is credited to the recipient. By attaching even a minimal monetary amount, the likelihood of a spammer sending a multitude of such messages may decrease, thereby increasing the probability that the sender is not a spammer.
  • a variety of other techniques may also be employed for monetary attachment, such as through a central clearinghouse on the Internet that charges for certifying messages. Therefore, a certificate included with the message may act to verify that the sender paid an amount of money to send the message.
  • Although identifying mechanisms have been described, a variety of other identifying mechanisms 216 ( x ) may also be employed without departing from the spirit and scope thereof. Further discussion of message processing may be found in relation to the following figures.
  • FIG. 3 depicts a procedure 300 in an exemplary implementation in which processing of a message is performed by local spam filters to determine whether a result of a proof should be included before communication of the message to an intended recipient.
  • a message is formed for communication over a network (block 302 ).
  • the communication module 108 ( 1 ) may be executed to compose an email, an instant message, and so on.
  • the message is then processed using one or more spam filters (block 304 ).
  • the communication module 108 ( 1 ) may forward the composed message to spam filters 124 ( 1 ) that are local on the client 102 ( 1 ).
  • an indication is received as to whether the message is considered to be spam (block 306 ).
  • the indication, for instance, may be configured as a binary indicator (e.g., “yes” or “no”) as to whether the message is considered spam by that spam filter 124 ( 1 ). Therefore, the indication is utilized to determine whether the message is considered spam (decision block 308 ).
  • when the message is not considered to be spam, the message is output for communication to an intended recipient over a network (block 310 ).
  • the client 102 ( 1 ) determines that the message is not likely to be considered spam by the intended recipient, and therefore may simply communicate the message without performing another action.
  • when the message is considered to be spam, a proof is computed (block 312 ).
  • a result of the computation and the message are then output for communication to an intended recipient over a network (block 314 ).
  • the client 102 ( 1 ) determines that the message is likely considered to be spam and therefore computes a proof to indicate the “non-spammer” intentions of the client 102 ( 1 ).
  • a relative likelihood (e.g., a score) may also be output and leveraged by the computational proofs.
  • an additional threshold may be utilized in conjunction with the spam filter's indication to protect from spam filters that are likely to be more aggressive than the spam filter employed by the client 102 ( 1 ), such as a spam filter employed by a communication service 106 ( m ).
  • the additional threshold may account for out-of-date spam filters that find the message “more spammy” than the sender's filter.
  • the threshold may be based on an update frequency of the spam filter 124 ( 1 ), with more rapid updates requiring smaller thresholds.
  • logic may be employed for specific intended recipients and/or communicators of the message. For instance, a particular communication service may filter more aggressively, and therefore a larger threshold may be employed.
  • messages that are sent to recipients within a local domain are not pre-processed, e.g., when recipients are located on a global address list, when recipients are included in a local domain of a sender, and so on.
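  • A minimal sender-side sketch of the FIG. 3 flow follows; the 0.5 cutoff, the additive safety margin standing in for the additional threshold discussed above, and the use of a score rather than a binary indication are assumptions made for illustration, and compute_proof stands for any proof-of-effort routine:

        def prepare_outgoing_message(message_text, local_filter, compute_proof, margin=0.1):
            # local_filter returns a spam likelihood in [0, 1] for the composed message.
            score = local_filter(message_text)                  # blocks 304-306
            if score + margin < 0.5:                            # decision block 308
                return {"body": message_text, "proof": None}    # block 310
            result = compute_proof(message_text)                # block 312
            return {"body": message_text, "proof": result}      # block 314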
  • a variety of other instances are also contemplated, an example of which is described as follows.
  • FIG. 4 depicts a procedure 400 in an exemplary implementation in which one or more proofs are selected based on a relative likelihood that a message will be considered spam.
  • an indication of “spamminess” of a message may be relative, such as provided by a score in which higher numbers indicate an increased likelihood of being spam.
  • This relative likelihood may also be utilized to select one or more proofs such that different “levels” of proof may be employed based on the relative likelihood of the message being considered spam.
  • a message is processed by one or more spam filters (block 402 ) and an indication is received of a relative likelihood that the message is considered to be spam (block 404 ), such as a numerical score, a relative indication of a degree of “spamminess”, and so on.
  • One or more of a plurality of proofs are then selected based on the relative likelihood (block 406 ).
  • the communication module 108 ( 1 ) may determine a level of proof that is proportional to the apparent “spamminess” of the message. For example, if the message is almost certainly not spam, the client 102 ( 1 ) may select a proof requiring a minimal amount of resources to compute. However, if the message is significantly “spammy”, the client 102 ( 1 ) may select one or more proofs requiring a significantly greater amount of resources to compute.
  • the selected one or more proofs are then computed (block 408 ) and the message and a result of the computation is output for communication to an intended recipient over a network (block 410 ).
  • the “amount” of proof is selected based on a guess as to how much proof will be required to bypass the spam filters of the intended recipient, as well as those of any communication services that communicate the message. This guess may also be based on the local spam filter 124 ( 1 ) (e.g., whether it is up-to-date), knowledge of the receiver's filters (e.g., the communication service 106 ( m ) employs aggressive spam filters), and so on.
  • the computations performed were “sender driven”, in that, the sender (e.g., client 102 ( 1 )) made a guess as to whether the recipients (e.g., communication service 106 ( m ) and client 102 (N)) would consider the message to be spam. This determination may also be made, at least in part, through communication with a recipient of the message, an example of which is described in relation to the following figure.
  • FIG. 5 depicts a procedure 500 in an exemplary implementation in which receiver-driven computation is performed.
  • a message is received over a network (block 502 ) and processed using one or more spam filters (block 504 ).
  • the communication service 106 ( m ) may receive a message from client 102 ( 1 ) and process the message using the spam filters 124 ( m ).
  • An indication is then received of a relative likelihood that the message is spam (block 506 ).
  • the indication may be configured as a numerical score, which may then be utilized to determine a proportional amount of proof (e.g., more or less computation) such that, when included, the message is not considered to be spam. Additional indicators may also be utilized when making this determination, such as through use of the identity mechanisms 216 ( x ) previously described in relation to FIG. 2 . Thus, a variety of factors may be utilized to determine the “amount” of proof to be included with the message.
  • when the amount of proof included with a message is deemed insufficient, a receiver (e.g., a communication service 106 ( m ) and/or the client 102 (N) that is the intended recipient) may communicate back that the sender's “guess” was wrong. Further, the recipient may also “give credit” to previous amounts of “proof” that were included in the message when requiring the additional proof, e.g., the sender's guess plus the additional proof required equals the minimum amount of proof needed to allow the message to be routed to a user's inbox.
  • this cost may put an asymmetric burden of proof on spammers because receivers will require larger amounts of proof before they are willing to place a “spammy” message in the intended recipient's inbox.
  • when spam filters are not synchronized (e.g., one spam filter has been updated and another one has not), the sender (e.g., client 102 ( 1 )) might “guess” incorrectly, and therefore messages sent by the sender may end up in the intended recipient's (e.g., client 102 (N)) “junk” mail folder. Therefore, by requesting additional proof, this situation may be avoided.
  • a recipient (e.g., the communication service 106 ( m ) and/or the intended recipient, client 102 (N)) may require a certain minimum amount of proof before requesting additional proof from a sender.
  • the amount of initial proof may be set such that using receiver-driven computation as a surrogate for web bugs and address book mining is uneconomical for spammers.
  • the “challenge” may be limited to instances in which the sender indicated a willingness to receive challenges, such as in an email header field.
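  • A receiver-side sketch of this receiver-driven flow follows; the mapping from filter score to required difficulty, the difficulty units (bits), and the way willingness to be challenged is signalled are illustrative assumptions only:

        def evaluate_incoming_message(message, spam_score, accepts_challenges):
            # spam_score is the receiving filter's likelihood in [0, 1]; the message may
            # carry a proof whose difficulty (in bits) reflects the sender's "guess".
            required_bits = int(24 * spam_score)            # illustrative proportionality
            provided_bits = (message.get("proof") or {}).get("n_bits", 0)
            if provided_bits >= required_bits:
                return {"action": "deliver"}
            if accepts_challenges:                          # e.g., indicated in an email header field
                # Credit the proof already supplied and request only the shortfall.
                return {"action": "challenge", "additional_bits": required_bits - provided_bits}
            return {"action": "junk_folder"}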

Abstract

Embodiments of proofs to filter spam are presented herein. Proofs are utilized to indicate a sender used a set amount of computer resources in sending a message in order to demonstrate the sender is not a “spammer”. Varying the complexity of the proofs, or the level of resources used to send the message, will indicate to the recipient the relative likelihood the message is spam. Higher resource usage indicates that the message may not be spam, while lower resource usage increases the likelihood a message is spam. Also, if the recipient requires a higher level of proof than received, the receiver may request the sender send additional proof to verify the message is not spam.

Description

BACKGROUND
The prevalence of message communication continues to increase as users utilize a wide variety of computing devices to communicate, one to another. For example, users may use desktop computers, wireless phones, and so on, to communicate through the use of email (i.e., electronic mail). Email employs standards and conventions for addressing and routing such that the email may be delivered across a network, such as the Internet, utilizing a plurality of devices. Thus, email may be transferred within a company over an intranet, across the world using the Internet, and so on.
Unfortunately, as the prevalence of these techniques for sending messages has continued to expand, the amount of “spam” encountered by the user has also continued to increase. Spam is typically thought of as an email that is sent to a large number of recipients, such as to promote a product or service. Because sending an email generally costs the sender little or nothing to send, “spammers” have developed which send the equivalent of junk mail to as many users as can be located. Even though a minute fraction of the recipients may actually desire the described product or service, this minute fraction may be enough to offset the minimal costs in sending the spam. Consequently, a vast number of spammers are responsible for communicating a vast number of unwanted and irrelevant emails. Thus, a typical user may receive a large number of these irrelevant emails, thereby hindering the user's interaction with relevant emails. In some instances, for example, the user may be required to spend a significant amount of time interacting with each of the unwanted emails in order to determine which, if any, of the emails received by the user might actually be of interest.
SUMMARY
Proof techniques to filter spam are described. Proofs may be utilized to indicate at least a minimal amount of resources were utilized by a sender in sending a message, thereby indicating that the sender is not likely a “spammer”. Additionally, different proofs may utilize different amounts of resources. The different proofs, therefore, may be used for different likelihoods that a message will be considered spam. For instance, a client may use a locally-executable spam filter to determine a relative likelihood that a message will be considered spam and select a proof to provide a proportional level of “proof” to the message, thereby increasing the likelihood that the message will not be considered as “spam” by a recipient of the message, e.g., a communication service that communicates the message to an intended recipient and/or the intended recipient itself.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration of an environment operable for communication of messages, such as emails, instant messages, and so on, across a network and is also operable to employ proof strategies.
FIG. 2 is an illustration of a system in an exemplary implementation showing a plurality of clients and a communication service of FIG. 1 in greater detail.
FIG. 3 is a flow chart depicting a procedure in an exemplary implementation in which processing of a message is performed by local spam filters to determine whether a result of a proof should be included before communication of the message to an intended recipient.
FIG. 4 is a flow chart depicting a procedure in an exemplary implementation in which one or more proofs are selected based on a relative likelihood that a message will be considered spam.
FIG. 5 is a flow chart depicting a procedure in an exemplary implementation in which receiver-driven computation is performed.
The same reference numbers are utilized in instances in the discussion to reference like structures and components.
DETAILED DESCRIPTION
Overview
As the prevalence of techniques for sending messages has continued to expand, the amount of “spam” encountered by the user has also continued to increase. Therefore, proofs may be utilized to differentiate between legitimate messages and messages that are sent by a spammer. For example, a proof may be computed that requires a significant amount of resources (e.g., processing and/or memory resources) over and above those typically required for a sender to send a message. A “memory bound” proof, for instance, may rely on memory latency to slow down computations that could otherwise be performed quickly by a processor alone, and therefore requires an appreciable amount of time for a computing device to process. Presence of the proof's result therefore indicates that the sender of the message performed the computation and is not likely a spammer, which may be taken into account when processing the message, such as by a spam filter.
Additionally, different “levels” of proof may also be employed. For example, a computational proof having a particular amount of difficulty (e.g., requiring a certain amount of computer resources) may provide a certain amount of protection, while a computational proof having a greater amount of difficulty may be used to provide a correspondingly greater amount of protection. Therefore, a sender may be “aware” of these levels and try to “guess” a proper amount of proof (e.g., difficulty) to be included with the message when communicated. Thus, senders of messages that do not look like spam may use relatively little proof while senders of messages that look like spam (e.g., a spammer) may use relatively larger amounts of proof. This improves the user experience for “good” users by allowing efficient use of proof that addresses the likely processing that will be performed on the message before the message is communicated.
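By way of a purely illustrative sketch (the patent does not prescribe a specific algorithm), a partial hash-collision proof of the kind discussed later in this description could parameterize its difficulty by the number of leading zero bits required, so that the sender's expected work grows exponentially with the parameter while verification remains a single hash:

    import hashlib
    from itertools import count

    def compute_proof(message_text, n_bits):
        # Find a nonce whose SHA-256 over the message (bytes) has n_bits leading zero bits.
        # Expected work grows roughly as 2**n_bits, so n_bits acts as the difficulty knob.
        target = 1 << (256 - n_bits)
        for nonce in count():
            digest = hashlib.sha256(message_text + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify_proof(message_text, nonce, n_bits):
        # Checking a claimed proof costs one hash, regardless of how hard it was to find.
        digest = hashlib.sha256(message_text + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - n_bits))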
In the following description, an exemplary environment is first described which is operable to employ the proof techniques. Exemplary procedures are then described which may operate in the exemplary environment, as well as in other environments.
Exemplary Environment
FIG. 1 is an illustration of an environment 100 operable for communication of messages across a network. The environment 100 is illustrated as including a plurality of clients 102(1), . . . , 102(N) that are communicatively coupled, one to another, over a network 104. The plurality of clients 102(1)-102(N) may be configured in a variety of ways. For example, one or more of the clients 102(1)-102(N) may be configured as a computer that is capable of communicating over the network 104, such as a desktop computer, a mobile station, a game console, an entertainment appliance, a set-top box communicatively coupled to a display device, a wireless phone, and so forth. The clients 102(1)-102(N) may range from full resource devices with substantial memory and processor resources (e.g., personal computers, television recorders equipped with hard disk) to low-resource devices with limited memory and/or processing resources (e.g., traditional set-top boxes). In the following discussion, the clients 102(1)-102(N) may also relate to a person and/or entity that operate the client. In other words, client 102(1)-102(N) may describe a logical client that includes a user, software and/or a machine.
Additionally, although the network 104 is illustrated as the Internet, the network may assume a wide variety of configurations. For example, the network 104 may include a wide area network (WAN), a local area network (LAN), a wireless network, a public telephone network, an intranet, and so on. Further, although a single network 104 is shown, the network 104 may be configured to include multiple networks. For instance, clients 102(1), 102(N) may be communicatively coupled via a peer-to-peer network to communicate, one to another. Each of the clients 102(1), 102(N) may also be communicatively coupled to one or more of a plurality of communication services 106(m) (where “m” can be any integer from one to “M”) over the Internet.
Each of the plurality of clients 102(1), . . . , 102(N) is illustrated as including a respective one of a plurality of communication modules 108(1), . . . , 108(N). In the illustrated implementation, each of the plurality of communication modules 108(1)-108(N) is executable on a respective one of the plurality of clients 102(1)-102(N) to send and receive messages. For example, one or more of the communication modules 108(1)-108(N) may be configured to send and receive email. As previously described, email employs standards and conventions for addressing and routing such that the email may be delivered across the network 104 utilizing a plurality of devices, such as routers, other computing devices (e.g., email servers), and so on. In this way, emails may be transferred within a company over an intranet, across the world using the Internet, and so on. An email, for instance, may include a header, text, and attachments, such as documents, computer-executable files, and so on. The header contains technical information about the source and oftentimes may describe the route the message took from sender to recipient.
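For instance, the “FROM” line and the “Received” lines that record this route can be read with Python's standard email module; the sample header below is hypothetical:

    from email import message_from_string

    raw = (
        "From: test@test.com\r\n"
        "Received: from mail.example.org ([204.200.100.17])\r\n"
        "Subject: hello\r\n"
        "\r\n"
        "Message body\r\n"
    )
    msg = message_from_string(raw)
    sender = msg["From"]                    # a basic, easily spoofed form of identity
    hops = msg.get_all("Received") or []    # one entry per device the message passed through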
In another example, one or more of the communication modules 108(1)-108(N) may be configured to send and receive instant messages. Instant messaging provides a mechanism such that each of the clients 102(1)-102(N), when participating in an instant messaging session, may send text messages to each other. The instant messages are typically communicated in real time, although delayed delivery may also be utilized, such as by logging the text messages when one of the clients 102(1)-102(N) is unavailable, e.g., offline. Thus, instant messaging may be thought of as a combination of email and Internet chat in that instant messaging supports message exchange and is designed for two-way live chats. Therefore, instant messaging may be utilized for synchronous communication. For instance, like a voice telephone call, an instant messaging session may be performed in real-time such that each user may respond to each other user as the instant messages are received.
In an implementation, the communication modules 108(1)-108(N) communicate with each other through use of the communication service 106(m). For example, client 102(1) may form a message using communication module 108(1) and send that message over the network 104 to the communication service 106(m) which is stored as one of a plurality of messages 110(j), where “j” can be any integer from one to “J”, in storage 112(m) through execution of a communication manager module 114(m). Client 102(N) may then “log on” to the communication service (e.g., by providing a name and password) and retrieve corresponding messages from storage 112(m) through execution of the communication module 108(N). A variety of other examples are also contemplated.
In another example, client 102(1) may cause the communication module 108(1) to form an instant message for communication to client 102(N). The communication module 108(1) is executed to communicate the instant message to the communication service 106(m), which then executes the communication manager module 114(m) to route the instant message to the client 102(N) over the network 104. The client 102(N) receives the instant message and executes the respective communication module 108(N) to display the instant message to a respective user. In another instance, when the clients 102(1), 102(N) are communicatively coupled directly, one to another (e.g., via a peer-to-peer network), the instant messages are communicated without utilizing the communication service 106(m). Although messages configured as emails and instant messages have been described, a variety of textual and non-textual messages (e.g., graphical messages, audio messages, and so on) may be communicated via the environment 100 without departing from the spirit and scope thereof. Additionally, computational proofs can be utilized for a wide variety of other communication techniques, such as to determine if a user will accept a voice-over-IP (VOIP) call or route the call to voicemail.
As previously described, the efficiency of the environment 100 has also resulted in communication of unwanted messages, commonly referred to as “spam”. Spam is typically provided via email that is sent to a large number of recipients, such as to promote a product or service. Thus, spam may be thought of as an electronic form of “junk” mail. Because a vast number of emails may be communicated through the environment 100 for little or no cost to the sender, a vast number of spammers are responsible for communicating a vast number of unwanted and irrelevant messages. Thus, each of the plurality of clients 102(1)-102(N) may receive a large number of these irrelevant messages, thereby hindering the client's interaction with actual messages of interest.
One technique which may be utilized to hinder the communication of unwanted messages is through the use of a computational proof, i.e., “proofs”. Proofs provide a technique that allows a sender of a message to prove their “non-spammer” intentions through use of a proof that enables the sender to indicate that a significant amount of hardware and/or software resources were expended by the client in the communication of the message. For example, clients 102(1)-102(N) are each illustrated as including a respective plurality of proofs 116(f), 116(g), where “f” and “g” can be any integer from one to “F” and “G”, respectively. Proof of effort algorithms generally involve use of a significant amount of computing resources (e.g., hardware and software resources) when solving a defined proof, e.g., a hash collision, a solution to a cryptographic problem, a solution to a memory bound problem, a solution to a reverse Turing test, and so on. As previously described, it typically requires few resources for a spammer to send a message. Therefore, by indicating that resources have been utilized by a sender of the message, the sender may indicate a decreased likelihood of being a spammer.
In the illustrated environment, the communication service 106(m) is also illustrated as including a plurality of proofs 116(h), where “h” can be any integer from one to “H”, which are stored in storage 118(m). Therefore, the communication service 106(m) in this instance may be used on behalf of one or more of the clients 102(1)-102(N) in the performance of the proofs 116(h). In another example, a third party 120 may also compute one or more of a plurality of proofs 116(i) (where “i” can be any integer from one to “I”) which are illustrated as stored in storage 122. For instance, the third party 120 may be configured as a web service to compute the proofs 116(i) when one or more of the clients 102(1)-102(N) is configured as a “thin” client as previously described. Therefore, the thin client may offload the computation of the proof to the third party to compute the proof. In another instance, the third party 120 is another computing device that is owned/accessible by the user (e.g., a desktop computer, work server, and so on) such that the user may transfer computation of the proofs between the user's computing devices before output to an intended recipient, such as from a wireless phone to a home computer, after which the message is then communicated for receipt by an intended recipient. A variety of other instances are also contemplated.
Because computation of the proofs indicates a decreased likelihood that a sender of the message is a “spammer”, spam filters employed in the environment 100 may take this into account when processing a message. For example, clients 102(1)-102(N) each include respective spam filters 124(1)-124(N) which are utilized to process messages received by the clients in order to “filter out” spam from legitimate messages. Spam filters 124(1)-124(N) may utilize a variety of techniques for filtering spam, such as through examination of message text, indicated sender, domains, and so on. The spam filters 124(1)-124(N), when processing the messages, may also take into account whether the message includes a result of a computational proof when determining whether the message is spam. Similar functionality may be employed by the spam filters 124(m) provided on the communication service 106(m). Therefore, a result of a computational proof may be utilized to obtain “safe passage” of the message through spam filters 124(1), 124(N), 124(m) employed in the environment 100.
Different amounts of resources, however, may be expended when computing different proofs 116(f), 116(g), 116(h), 116(i). For example, computation of a first one of the proofs 116(f) may require more hardware and software resources than computation of another one of the proofs 116(f). Therefore, the spam filters 124(1)-124(N) may also be configured to address the amount of computation utilized to perform the respective proofs when determining whether or not a message is spam, further discussion of which may be found in relation to the following figure.
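A hypothetical sketch of how a filter might credit an attached proof when scoring a message follows; verify_proof is the hash-collision helper sketched above, and the per-bit credit and 0.5 cutoff are assumptions made for the example:

    def filter_with_proof_credit(message_text, base_score, proof=None):
        # base_score is an ordinary content filter's likelihood (higher means more "spammy").
        score = base_score
        if proof is not None and verify_proof(message_text, proof["nonce"], proof["n_bits"]):
            score -= 0.02 * proof["n_bits"]    # more expensive proofs earn more credit
        return score >= 0.5                    # True: treat the message as spam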
Generally, any of the functions described herein can be implemented using software, firmware (e.g., fixed logic circuitry), manual processing, or a combination of these implementations. The terms “module,” “functionality,” and “logic” as used herein generally represent software, firmware, or a combination of software and firmware. In the case of a software implementation, the module, functionality, or logic represents program code that performs specified tasks when executed on a processor (e.g., CPU or CPUs). The program code can be stored in one or more computer readable memory devices, further description of which may be found in relation to FIG. 2. The features of the proof strategies described below are platform-independent, meaning that the strategies may be implemented on a variety of commercial computing platforms having a variety of processors.
FIG. 2 is an illustration of a system 200 in an exemplary implementation showing the plurality of clients 102(n) and the communication service 106(m) of FIG. 1 in greater detail. Client 102(n) is representative of any of the plurality of clients 102(1)-102(N) of FIG. 1, and therefore reference will be made to client 102(n) in both singular and plural form. The communication service 106(m) is illustrated as being implemented by a plurality of servers 202(s), where “s” can be any integer from one to “S”, and the client 102(n) is illustrated as a client device. Further, the servers 202(s) and the clients 102(n) are illustrated as including respective processors 204(s), 206(n) and respective memory 208(s), 210(n).
Processors are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions. Alternatively, the mechanisms of or for processors, and thus of or for a computing device, may include, but are not limited to, quantum computing, optical computing, mechanical computing (e.g., using nanotechnology), and so forth. Additionally, although a single memory 208(s), 210(n) is shown for the respective server 202(s) and client 102(n), the memory 208(s), 210(n) may be representative of a wide variety of types and combinations of memory that may be employed, such as random access memory (RAM), hard disk memory, removable medium memory, and other computer-readable media.
The clients 102(n) are illustrated as executing the communication module 108(n) and the spam filters 124(n) on the processor 206(n), which are also storable in memory 210(n). Additionally, the communication module 108(n) is illustrated as including a proof module 212(n), which is representative of functionality to select and perform proofs 116(1), . . . , 116(y), . . . , 116(Y) (which may or may not correspond to the proofs 116(f), 116(g) of FIG. 1). For example, the communication module 108(n), when executed, may be utilized to form a message for communication over the network 104. Before the message is communicated, the communication module 108(n) may process the message using the client's 102(n) spam filter 124(n) to determine a likelihood of whether the message, as is, will be considered spam by an intended recipient and/or a communication service 106(m) configured to communicate the message to the intended recipient. When the message is considered spam, the proof module 212(n) may perform one or more of the proofs 116(1)-116(Y), a result of which is then combined with the message before communication over the network 104. In this way, the client 102(n) may indicate, through use of the proof, that the message is not spam; in this case, the client determines whether to even perform one or more of the proofs 116(1)-116(Y) without contacting an intended recipient beforehand. Further discussion of processing a message by a spam filter before communication over the network may be found in relation to FIG. 3.
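A minimal sketch of this sender-side flow is shown below, assuming a local `spam_filter` callable that flags a drafted message as spam-like and a `compute_proof` routine (such as the hash-collision example after the next paragraph); both names, and the use of a message header to carry the result, are placeholders rather than interfaces defined by this description.

```python
# Hypothetical sender-side flow: run the outgoing message through the local
# spam filter and, only when it looks spam-like, compute a proof and attach
# the result before the message is communicated. All names are illustrative.

from typing import Callable, Dict

def prepare_outgoing_message(message: Dict[str, str],
                             spam_filter: Callable[[Dict[str, str]], bool],
                             compute_proof: Callable[[Dict[str, str]], str]) -> Dict[str, str]:
    """Attach a proof result only when the local filter considers the message spam."""
    if spam_filter(message):
        message = dict(message)  # leave the caller's copy untouched
        # The result travels with the message (e.g., in a header) so that
        # downstream filters can verify the work that was performed.
        message["X-Proof-Result"] = compute_proof(message)
    return message
```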
As previously described, proofs 116(1)-116(Y) may require different amounts of resources to be performed, which is illustrated in FIG. 2 by an arrow 214 that indicates that proof 116(Y) is more resource intensive than proof 116(y), which is more resource intensive than proof 116(1). For example, different proof mechanisms may include parameters that specify a particular difficulty, e.g., in a hash collision case an “N” bit collision may be utilized, in which computation time increases exponentially as “N” increases. These differences in resource amounts may also be utilized in conjunction with an indication of a relative likelihood that the message will be considered spam to select an appropriate proof 116(1)-116(Y) to be performed before communication of the message. For example, a message that, when processed by the spam filter 124(n), indicates a relatively low likelihood of being considered spam may include a result of a proof 116(1) that consumes relatively few resources when performed. On the other hand, a message that, when processed by the spam filter 124(n), indicates a relatively high likelihood of being considered spam may include a result of a proof 116(Y) that consumes a relatively high amount of resources. In this way, the proof module 212(n) may select the proof 116(1)-116(Y), and even choose to forgo inclusion of a proof, in a manner which conserves resources of the client 102(n) yet still indicates that the client 102(n) is not a spammer. Further discussion of proof selection may be found in relation to FIG. 4.
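One common way to realize such a parameterized proof is a hashcash-style partial hash collision: the sender searches for a counter that, hashed together with the message, produces N leading zero bits, so the expected work roughly doubles with each added bit while verification needs only a single hash. The sketch below is an illustrative assumption and not the specific proof mechanism of any figure.

```python
import hashlib

def compute_hash_collision(message: bytes, n_bits: int) -> int:
    """Find a counter whose SHA-256 over (message, counter) has n_bits leading zero bits.

    Expected work grows on the order of 2**n_bits; verification is a single hash.
    """
    counter = 0
    while True:
        digest = hashlib.sha256(message + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - n_bits) == 0:
            return counter
        counter += 1

def verify_hash_collision(message: bytes, counter: int, n_bits: int) -> bool:
    digest = hashlib.sha256(message + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits) == 0

# Example: a modest 16-bit collision takes roughly 2**16 hash attempts.
proof = compute_hash_collision(b"example message body", 16)
assert verify_hash_collision(b"example message body", proof, 16)
```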
The results of the proofs 116(1)-116(Y) may be combined with a variety of identifying mechanisms 216(x) that may also indicate a relative likelihood that a message is spam and/or sent by a spammer. For example, when a user receives a message, the communication modules 108(n) and/or manager module 114(m) gather and validate the message utilizing one or more applicable identifying mechanisms 216(x). For instance, the identifying mechanisms 216(x) may involve checking that part of a message is signed with a specific private key, that a message was sent from a machine that is approved via a sender's identification for a specified domain, and so on. A variety of identifying mechanisms 216(x) and combinations thereof may be employed by the communication modules 108(n), 114(m), and/or the spam filters 124(n), 124(m), examples of which are described as follows.
Email Address
The email address is a standard form of identity. The email address may be checked by looking at a ‘FROM’ line in the header of a message. Although the email address may be particularly vulnerable to attack, a combination of the email address and another one of the identifying mechanisms 216(x) and/or the proofs 116(1)-116(Y) may result in substantial protection.
Third Party Certificates
Third party certificates may involve the signing of a portion of a message with a certificate that can be traced to a third-party certifier. This signature can be attached utilizing a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature. The level of security provided by this technique may also be based on the reputation of the third party certifier, a type of certificate (e.g. some certifiers offer several levels of increasingly secure certification), and on the amount of the message signed (signing more of the message is presumably more secure).
Self-Signed Certificate
A self-signed certificate involves signing a portion of a message with a certificate that the sender created. Like a third-party certificate, this identifying mechanism may be attached using a variety of techniques, such as through secure/multipurpose Internet mail extension (S/MIME) techniques, e.g., by including a header in the message that contains the signature. In an implementation, use of a self-signed certificate involves the creation of a public/private key pair by a sender, signing part of the message with the private key, and distributing the public key in the message (or via other standard methods). The level of security provided by this method is based on the amount of the message signed.
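As a rough sketch of the self-signed case, the sender could create a key pair, sign the message body with the private key, and ship the raw public key and signature with the message; the use of the third-party `cryptography` package and of Ed25519 keys below are assumptions for illustration, since this description does not prescribe a particular algorithm.

```python
# Illustrative sketch only: sign part of a message with a key pair that the
# sender created, and distribute the public key along with the message.
# Assumes the third-party 'cryptography' package and Ed25519 keys.

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

signed_portion = b"This is the portion of the message covered by the signature."

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(signed_portion)

# The public key and signature would travel with the message, e.g., in
# S/MIME headers; here they are shown simply as raw bytes.
public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Receiver side: verify() raises InvalidSignature if the signed portion was altered.
Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, signed_portion)
```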
Passcode
The passcode identifying mechanism involves the use of a passcode in a message, such as by including a public key in a message but not signing any portion of the message with the associated private key. This identity mechanism may be useful for users who have mail transfer agents that modify messages in transfer and destroy the cryptographic properties of signatures, such that the signatures cannot be verified. This identifying mechanism is useful as a lightweight way to establish a form of identity. Although a passcode is still potentially spoofable, the passcode may be utilized with other identifying mechanisms to provide greater likelihood of verification (i.e., authenticity of the sender's identity).
IP Address
The IP address identifying mechanism involves validating whether a message was sent from a particular IP address or IP address range (e.g. the IP/24 range 204.200.100.*). In an implementation, this identity mechanism may support a less secure mode in which the IP address/range may appear in any of a message's “received” header lines. As before, the use of a particular IP address, IP address range, and/or where the IP address or range may be located in a message can serve as a basis for a relative likelihood that the message was sent from a spammer.
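A small sketch of the range check follows, reusing the 204.200.100.* example as a /24 network; extracting the sender's IP from the message's “received” header lines is omitted, and the function name is an assumption.

```python
import ipaddress

def sent_from_trusted_range(sender_ip: str,
                            trusted_range: str = "204.200.100.0/24") -> bool:
    """Return True if the observed sender IP falls inside the trusted range."""
    return ipaddress.ip_address(sender_ip) in ipaddress.ip_network(trusted_range)

# Example: IPs as they might be extracted from a message's "received" headers.
assert sent_from_trusted_range("204.200.100.42")
assert not sent_from_trusted_range("198.51.100.7")
```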
Valid Sender ID
The valid Sender ID identifying mechanism involves validating whether a message was sent from a computer that is authorized to send messages for a particular domain via the Sender ID. For example, reference may be made to a trusted domain. For instance, “test@test.com” is an address and “test.com” is the domain. It should be noted that the domain does not need to match exactly, e.g., the domain could also be formatted as foo.test.com. When a message from this address is received, the communication module 108(n) may perform a Sender ID test on the “test.com” domain, and if the message matches the entry, it is valid. This identifying mechanism can also leverage algorithms for detecting IP addresses in clients and any forthcoming standards for communicating IP addresses from edge servers, standards for communicating the results of Sender ID checks from the edge servers, and so on. Additionally, it should be noted that the Sender ID test is not limited to any particular sender identification technique or framework (e.g., sender policy framework (SPF), sender ID framework from MICROSOFT (Microsoft is a trademark of the Microsoft Corporation, Redmond, Wash.), and so on), but may include any mechanism that provides for authentication of a user or domain.
Monetary Attachment
The monetary attachment identifying mechanism involves inclusion of a monetary amount with a message for sending, in what may be referred to as an “e-stamp”. For example, a sender of the message may attach a monetary amount to the message that is credited to the recipient. By attaching even a minimal monetary amount, the likelihood of a spammer sending a multitude of such messages may decrease, thereby increasing the probability that the sender is not a spammer. A variety of other techniques may also be employed for monetary attachment, such as through a central clearinghouse on the Internet that charges for certifying messages. Therefore, a certificate included with the message may act to verify that the sender paid an amount of money to send the message. Although a variety of identifying mechanisms have been described, a variety of other identifying mechanisms 216(x) may also be employed without departing from the spirit and scope thereof. Further discussion of message processing may be found in relation to the following figures.
Exemplary Procedures
The following discussion describes proof techniques that may be implemented utilizing the previously described systems and devices. Aspects of each of the procedures may be implemented in hardware, firmware, or software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. It should also be noted that the following exemplary procedures may be implemented in a wide variety of other environments without departing from the spirit and scope thereof.
FIG. 3 depicts a procedure 300 in an exemplary implementation in which processing of a message is performed by local spam filters to determine whether a result of a proof should be included before communication of the message to an intended recipient. A message is formed for communication over a network (block 302). For example, the communication module 108(1) may be executed to compose an email, an instant message, and so on.
The message is then processed using one or more spam filters (block 304). The communication module 108(1), for instance, may forward the composed message to spam filters 124(1) that are local to the client 102(1). From the processing, an indication is received as to whether the message is considered to be spam (block 306). The indication, for instance, may be configured as a binary indicator (e.g., “yes” or “no”) as to whether the message is considered spam by that spam filter 124(1). Therefore, the indication is utilized to determine whether the message is considered spam (decision block 308).
When the message is not indicated as spam (“no” from decision block 308), the message is output for communication to an intended recipient over a network (block 310). Thus, the client 102(1) in this instance determines that the message is not likely to be considered spam by the intended recipient, and therefore may simply communicate the message without performing another action.
When the message is indicated as spam (“yes” from decision block 308), a proof is computed (block 312). A result of the computation and the message are then output for communication to an intended recipient over a network (block 314). Thus, in this instance, the client 102(1) determines that the message is likely to be considered spam and therefore computes a proof to indicate the “non-spammer” intentions of the client 102(1).
Although a binary indication was described as being output from the spam filters, a relative likelihood (e.g., a score) may also be output and leveraged by the computational proofs. For example, an additional threshold may be utilized in conjunction with the spam filter's indication to protect against spam filters that are likely to be more aggressive than the spam filter employed by the client 102(1), such as a spam filter employed by a communication service 106(m). In this way, the additional threshold may account for out-of-date spam filters that find the message “more spammy” than the sender's filter. For instance, the threshold may be based on an update frequency of the spam filter 124(1), with more rapid updates requiring smaller thresholds.
Additionally, logic may be employed for specific intended recipients and/or communicators of the message. For instance, a particular communication service may filter more aggressively, and therefore a larger threshold may be employed. In an implementation, messages that are sent to recipients within a local domain are not pre-processed, e.g., when recipients are located on a global address list, when recipients are included in a local domain of a sender, and so on. A variety of other instances are also contemplated, an example of which is described as follows.
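The sketch below combines the two ideas just described: messages to recipients in the sender's own domain are not pre-processed, and the decision threshold is widened when the local filter has gone a long time without updates; the specific numbers, the staleness window, and the field names are assumptions for illustration.

```python
# Hypothetical pre-send decision combining the local-domain exception with a
# threshold that accounts for how up-to-date the local spam filter is.
# The numeric values are illustrative assumptions.

from datetime import datetime, timedelta

def should_compute_proof(score: float,
                         recipient: str,
                         sender_domain: str,
                         filter_last_updated: datetime,
                         base_threshold: float = 0.5) -> bool:
    """Decide whether to perform a proof before sending an outgoing message.

    score               -- relative likelihood from the local spam filter (0..1)
    recipient           -- address of the intended recipient
    sender_domain       -- the sender's own domain, e.g. "example.com"
    filter_last_updated -- when the local spam filter was last refreshed
    """
    # Messages sent to recipients within the sender's local domain are not pre-processed.
    if recipient.lower().endswith("@" + sender_domain.lower()):
        return False

    # A stale local filter may underestimate how "spammy" other filters will find
    # the message, so widen the margin (compute a proof sooner) as it ages.
    staleness = datetime.now() - filter_last_updated
    margin = 0.1 if staleness > timedelta(days=7) else 0.0
    return score >= (base_threshold - margin)
```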
FIG. 4 depicts a procedure 400 in an exemplary implementation in which one or more proofs are selected based on a relative likelihood that a message will be considered spam. In the previous example, an implementation was described in which an indication of “spamminess” of a message may be relative, such as provided by a score in which higher numbers indicate an increased likelihood of being spam. This relative likelihood may also be utilized to select one or more proofs such that different “levels” of proof may be employed based on the relative likelihood of the message being considered spam. As before, a message is processed by one or more spam filters (block 402) and an indication is received of a relative likelihood that the message is considered to be spam (block 404), such as a numerical score, a relative indication of a degree of “spamminess”, and so on.
One or more of a plurality of proofs are then selected based on the relative likelihood (block 406). Thus, the communication module 108(1) may determine a level of proof that is proportional to the apparent “spamminess” of the message. For example, if the message is almost certainly not spam, the client 102(1) may select a proof requiring a minimal amount of resources to compute. However, if the message is significantly “spammy”, the client 102(1) may select one or more proofs requiring a significantly greater amount of resources to compute. The selected one or more proofs are then computed (block 408) and the message and a result of the computation are output for communication to an intended recipient over a network (block 410).
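One way to realize this proportional selection, assuming the relative likelihood is expressed as a score between 0 and 1 and the available proofs are the N-bit hash collisions sketched earlier (both assumptions, with illustrative breakpoints):

```python
def select_proof_difficulty(spam_score: float) -> int:
    """Map the filter's relative likelihood (0..1) to a hash-collision difficulty in bits.

    A message that is almost certainly not spam gets little or no proof,
    while a very spam-like message gets a proof that is expensive to compute.
    The breakpoints are illustrative assumptions.
    """
    if spam_score < 0.2:
        return 0    # forgo the proof entirely
    if spam_score < 0.5:
        return 12   # inexpensive proof
    if spam_score < 0.8:
        return 18   # moderate proof
    return 24       # expensive proof for very spam-like messages
```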
Thus, in this example, the “amount” of proof is selected based on a guess as to how much proof will be required to bypass the spam filters of the intended recipient, as well as those of any communication services that communicate the message. This guess may also be based on the local spam filter 124(1) (e.g., whether it is up-to-date), knowledge of the receiver's filters (e.g., the communication service 106(m) employs aggressive spam filters), and so on. In the previous example, the computations performed were “sender driven”, in that the sender (e.g., client 102(1)) made a guess as to whether the recipients (e.g., communication service 106(m) and client 102(N)) would consider the message to be spam. This determination may also be made, at least in part, through communication with a recipient of the message, an example of which is described in relation to the following figure.
FIG. 5 depicts a procedure 500 in an exemplary implementation in which receiver-driven computation is performed. A message is received over a network (block 502) and processed using one or more spam filters (block 504). For example, the communication service 106(m) may receive a message from client 102(1) and process the message using the spam filters 124(m). An indication is then received of a relative likelihood that the message is spam (block 506).
Based at least in part on the indication, a determination is made as to an amount of proof to be associated with the message such that the message is not considered spam (block 508). For instance, the indication may be configured as a numerical score, which may then be utilized to determine a proportional amount of proof (e.g., more or less computation) such that, when included, the message is not considered to be spam. Additional indicators may also be utilized when making this determination, such as through use of the identity mechanisms 216(x) previously described in relation to FIG. 2. Thus, a variety of factors may be utilized to determine the “amount” of proof to be included with the message.
A determination is then made as to whether the message includes the amount (decision block 510). If so (“yes” from decision block 510), the message is routed accordingly, e.g., to a client's inbox. If not (“no” from decision block 510), a communication is formed to be communicated to a sender of the message to request additional computation (block 514). Thus, in this instance, a receiver (e.g., a communication service 106(m) and/or the client 102(N) that is the intended recipient) may report back that additional proof is needed before further processing and/or routing, e.g., passing to an inbox, pushing to the intended recipient, and so forth. In other words, the recipient may communicate back that the sender's “guess” was wrong. Further, the recipient may also “give credit” to previous amounts of “proof” that were included in the message when requiring the additional proof, e.g., the sender's guess plus the additional proof required equals the minimum amount of proof needed to allow the message to be routed to a user's inbox. Thus, this cost may put an asymmetric burden of proof on spammers because receivers will require larger amounts of proof before the receiver is willing to place a “spammy” message in the intended recipient's inbox.
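A sketch of the receiver-side bookkeeping described above is shown below: the receiver credits whatever proof is already attached and asks the sender only for the shortfall. The unit of “proof bits”, the scaling of the requirement with the spam score, and the function name are assumptions for illustration.

```python
from typing import Optional

def additional_proof_required(spam_score: float, attached_bits: int) -> Optional[int]:
    """Return how many more proof bits the sender must supply, or None if the message may be routed.

    spam_score    -- the receiver-side filter's relative likelihood (0..1)
    attached_bits -- difficulty of the proof already included with the message
    """
    # The receiver requires more proof the "spammier" the message appears.
    required_bits = int(spam_score * 30)

    if attached_bits >= required_bits:
        return None  # enough proof; route the message, e.g., to the recipient's inbox

    # Give credit for the work already performed: request only the shortfall, so
    # the sender's earlier guess plus this top-up meets the minimum requirement.
    return required_bits - attached_bits
```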
These techniques may also be employed to address a situation, in which, the spam filters are not synchronized, e.g., one spam filter has been updated and another one has not. For example, due to a lack of synchronization, the sender (e.g., client 102(1)) might “guess” incorrectly, and therefore messages sent by the sender may end up in the intended recipients' (e.g., client 102(N)) “junk” mail folder. Therefore, by requesting additional proof, this situation may be avoided.
In an implementation, a recipient (e.g., the communication service 106(m) and/or the intended recipient, client 102(N)) may choose not to inform the sender (e.g., client 102(1)) that additional proof is required in order to avoid “web bugs” (i.e., techniques that spammers use to determine when a receiver reads a message) and address book mining (i.e., techniques used by spammers to determine when an account is live, and thus worth spamming). In such an instance, the recipient may require a certain minimum amount of proof before requesting additional proof from a sender. Thus, the amount of initial proof may be set such that using receiver-driven computation as a surrogate for web bugs and address book mining is uneconomical for spammers. In another example, the “challenge” may be limited to instances in which the sender indicated a willingness to receive challenges, such as in an email header field.
CONCLUSION
Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed invention.

Claims (19)

1. A method comprising:
processing an outgoing message using one or more spam filters on a client computer;
computing, via the client computer, a result from a proof to be included with the outgoing message that is communicated over a network to an intended recipient when the processing indicates that the outgoing message is considered spam, wherein an extent of processing performed by computing resources used to calculate the proof varies based on the relative probability the outgoing message is spam and the indicated relative probability the outgoing message is spam includes an additional threshold to protect from spam filters that are likely to be more aggressive than the one or more spam filters employed by the client computer;
selecting the proof from a plurality of proofs based on the indicated relative probability the outgoing message is spam, such that selecting includes:
selecting a complex proof that uses a higher amount of computer resources as the proof when the relative probability the outgoing message is spam is higher; and
selecting a simple proof that uses a lower amount of computer resources as the proof when the relative probability the outgoing message is spam is lower;
attaching the result from the proof with the outgoing message before the outgoing message is sent to the intended recipient, the result from the proof being included in the outgoing message as evidence that a sender of the outgoing message expended additional computer resources to indicate the outgoing message is less likely to be spam; and
receiving a communication from the intended recipient indicating that the result attached to the outgoing message is not enough evidence that the message is less likely to be spam.
2. A method as described in claim 1, wherein the outgoing message is an email or an instant message.
3. A method as described in claim 1, wherein the proof is a Proof of Effort (POE) algorithm.
4. A method as described in claim 1, wherein the processing and the computing are performed:
on a client that composed the outgoing message; and
before the outgoing message is communicated over the network.
5. A method as described in claim 1, wherein:
the processing indicates a relative probability that the outgoing message is spam; and
the proof is selected from a plurality of proofs based on the indicated relative probability and an identity mechanism utilized by the outgoing message to identify a sender of the message.
6. A method as described in claim 1, further comprising outputting the outgoing message and the result of the computation to be communicated to an intended recipient over the network.
7. A method comprising:
determining a relative probability that an outgoing message is spam by processing the outgoing message using one or more spam filters of a computing device;
selecting one or more proofs for the outgoing message, a result of the one or more proofs to be computed by the computing device based on the relative probability the outgoing message is spam, wherein the proof for the outgoing message is an extraneous operation performed by a client computer that is unrelated to the computing device that determines the relative probability that the outgoing message is spam;
wherein a complex proof that uses a higher amount of computer resources is selected for the outgoing message from the one or more proofs when the relative probability the outgoing message is spam is higher;
wherein a simple proof that uses a lower amount of computer resources is selected for the outgoing message from the one or more proofs when the relative probability the outgoing message is spam is lower;
including the result of the one or more proofs with the outgoing message before the outgoing message is communicated over a network to an intended recipient; and
receiving a response from the intended recipient over the network indicating that the result of the one or more proofs for the outgoing message included with the outgoing message does not represent a sufficient amount of computer resources to indicate that the outgoing message is less likely to be spam.
8. A method as described in claim 7, wherein the determining and the selecting are performed:
before the outgoing message is received by an intended recipient; and
by a client that composed the outgoing message before communication over a network.
9. A method as described in claim 8, wherein the determining is not performed when an intended recipient of the outgoing message is in a same domain as a sender of the outgoing message.
10. A method as described in claim 8, wherein the determining and the selecting are performed by a communication service that receives the outgoing message from a client that composed the outgoing message.
11. A method as described in claim 7, wherein the determining is based at least in part on an identity mechanism utilized by the outgoing message to identify a sender of the outgoing message.
12. A method as described in claim 7, wherein each said proof is a Proof of Effort (POE) algorithm.
13. A method as described in claim 7, further comprising outputting the outgoing message and a result of the computation of the selected one or more said proofs for communication to the intended recipient over a network.
14. Computer memory device storing computer-executable instructions that, when executed on one or more processors, performs acts comprising:
computing a result from a proof for an outgoing message, the result to be included with the outgoing message that is communicated over a network to an intended recipient upon an indication that the outgoing message has a relative probability of being spam;
determining a relative amount of requested processing to be performed by computing resources used to calculate the result of the proof for the outgoing message based on the relative probability that the outgoing message is spam, the proof for the outgoing message being at least one of a hash collision, a solution to a cryptographic problem, a solution to a memory bound problem, or a solution to a reverse Turing test;
selecting the proof for the outgoing message from a plurality of proofs based on the relative amount of requested processing such that a complex proof that uses a larger amount of computer resources is selected as the proof for the outgoing message when the relative probability the outgoing message is spam is higher, and a simple proof that uses a smaller amount of computer resources is selected as the proof for the outgoing message when the relative probability the outgoing message is spam is lower;
attaching the result from the proof for the outgoing message with the outgoing message before the outgoing message is sent to the intended recipient, the result from the proof for the outgoing message being included in the outgoing message as evidence that a sender of the outgoing message expended additional computer resources to indicate the outgoing message is less likely to be spam;
sending the outgoing message with the attached result to the intended recipient;
receiving a reply from the intended recipient in response to the sending, the reply indicating that the result of the proof for the outgoing message attached to the outgoing message is not enough evidence that the outgoing message is less likely to be spam; and
sending an updated outgoing message to the intended recipient, the updated outgoing message including the result of the proof for the outgoing message and further including an additional result of an additional proof for the outgoing message computed in response to the receiving the reply.
15. A method as described in claim 1, further comprising:
computing, via the client computer, an additional result from an additional proof to be included with the outgoing message that is communicated over the network to the intended recipient; and
attaching the additional result from the additional proof along with the result to the outgoing message before the outgoing message is sent again to the intended recipient, the additional result included along with the result in the outgoing message as further evidence that the outgoing message is less likely to be spam.
16. A method as described in claim 7, further comprising including an additional result of an additional proof along with the result of the one or more proofs in a re-communication of the outgoing message over the network to the intended recipient in response to the receiving the response.
17. A method as described in claim 1, wherein at least one of:
the processing includes determining, by the one or more spam filters on the client computer, whether to perform one or more proofs; or
the processing is based, at least in part, on an up-to-date status of the one or more spam filters on the client computer.
18. A method as described in claim 1, wherein the determining includes receiving an initial estimation of a sender that a level of proof for the outgoing message will be sufficient for the outgoing message to the intended recipient.
19. A method as described in claim 1, wherein the spam filters that are likely to be more aggressive operate on a computing device other than the client computer.
US11/265,842 2005-11-03 2005-11-03 Proofs to filter spam Expired - Fee Related US8065370B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/265,842 US8065370B2 (en) 2005-11-03 2005-11-03 Proofs to filter spam

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/265,842 US8065370B2 (en) 2005-11-03 2005-11-03 Proofs to filter spam

Publications (2)

Publication Number Publication Date
US20070100949A1 US20070100949A1 (en) 2007-05-03
US8065370B2 true US8065370B2 (en) 2011-11-22

Family

ID=37997874

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/265,842 Expired - Fee Related US8065370B2 (en) 2005-11-03 2005-11-03 Proofs to filter spam

Country Status (1)

Country Link
US (1) US8065370B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080270209A1 (en) * 2007-04-25 2008-10-30 Michael Jon Mauseth Merchant scoring system and transactional database
US20110225076A1 (en) * 2010-03-09 2011-09-15 Google Inc. Method and system for detecting fraudulent internet merchants
US20120158851A1 (en) * 2010-12-21 2012-06-21 Daniel Leon Kelmenson Categorizing Social Network Objects Based on User Affiliations
US8396935B1 (en) * 2012-04-10 2013-03-12 Google Inc. Discovering spam merchants using product feed similarity
US9811830B2 (en) 2013-07-03 2017-11-07 Google Inc. Method, medium, and system for online fraud prevention based on user physical location data
US10621181B2 (en) * 2014-12-30 2020-04-14 Jpmorgan Chase Bank Usa, Na System and method for screening social media content
USRE49334E1 (en) 2005-10-04 2022-12-13 Hoffberg Family Trust 2 Multifactorial optimization system and method

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8484295B2 (en) 2004-12-21 2013-07-09 Mcafee, Inc. Subscriber reputation filtering method for analyzing subscriber activity and detecting account misuse
US7953814B1 (en) * 2005-02-28 2011-05-31 Mcafee, Inc. Stopping and remediating outbound messaging abuse
US9015472B1 (en) 2005-03-10 2015-04-21 Mcafee, Inc. Marking electronic messages to indicate human origination
US9160755B2 (en) * 2004-12-21 2015-10-13 Mcafee, Inc. Trusted communication network
US8738708B2 (en) * 2004-12-21 2014-05-27 Mcafee, Inc. Bounce management in a trusted communication network
US20060253597A1 (en) * 2005-05-05 2006-11-09 Mujica Technologies Inc. E-mail system
US7734703B2 (en) * 2006-07-18 2010-06-08 Microsoft Corporation Real-time detection and prevention of bulk messages
US8346875B2 (en) * 2007-10-05 2013-01-01 Saar Gillai Intelligence of the crowd electronic mail management system
US10354229B2 (en) * 2008-08-04 2019-07-16 Mcafee, Llc Method and system for centralized contact management

Citations (206)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5377354A (en) 1989-08-15 1994-12-27 Digital Equipment Corporation Method and system for sorting and prioritizing electronic mail messages
US5459717A (en) 1994-03-25 1995-10-17 Sprint International Communications Corporation Method and apparatus for routing messagers in an electronic messaging system
US5619648A (en) 1994-11-30 1997-04-08 Lucent Technologies Inc. Message filtering techniques
US5638487A (en) 1994-12-30 1997-06-10 Purespeech, Inc. Automatic speech recognition
US5704017A (en) 1996-02-16 1997-12-30 Microsoft Corporation Collaborative filtering utilizing a belief network
US5805801A (en) 1997-01-09 1998-09-08 International Business Machines Corporation System and method for detecting and preventing security
US5835087A (en) 1994-11-29 1998-11-10 Herz; Frederick S. M. System for generation of object profiles for a system for customized electronic identification of desirable objects
US5884033A (en) 1996-05-15 1999-03-16 Spyglass, Inc. Internet filtering system for filtering data transferred over the internet utilizing immediate and deferred filtering actions
US5905859A (en) 1997-01-09 1999-05-18 International Business Machines Corporation Managed network device security method and apparatus
US5911776A (en) 1996-12-18 1999-06-15 Unisys Corporation Automatic format conversion system and publishing methodology for multi-user network
US5930471A (en) 1996-12-26 1999-07-27 At&T Corp Communications system and method of operation for electronic messaging using structured response objects and virtual mailboxes
US5999967A (en) 1997-08-17 1999-12-07 Sundsted; Todd Electronic mail filtering by electronic stamp
US5999932A (en) 1998-01-13 1999-12-07 Bright Light Technologies, Inc. System and method for filtering unsolicited electronic mail messages using data matching and heuristic processing
US6003027A (en) 1997-11-21 1999-12-14 International Business Machines Corporation System and method for determining confidence levels for the results of a categorization system
US6023723A (en) 1997-12-22 2000-02-08 Accepted Marketing, Inc. Method and system for filtering unwanted junk e-mail utilizing a plurality of filtering mechanisms
US6041321A (en) 1996-10-15 2000-03-21 Sgs-Thomson Microelectronics S.R.L. Electronic device for performing convolution operations
US6041324A (en) 1997-11-17 2000-03-21 International Business Machines Corporation System and method for identifying valid portion of computer resource identifier
US6047242A (en) 1997-05-28 2000-04-04 Siemens Aktiengesellschaft Computer system for protecting software and a method for protecting software
US6052709A (en) 1997-12-23 2000-04-18 Bright Light Technologies, Inc. Apparatus and method for controlling delivery of unsolicited electronic mail
US6072942A (en) 1996-09-18 2000-06-06 Secure Computing Corporation System and method of electronic mail filtering using interconnected nodes
US6101531A (en) 1995-12-19 2000-08-08 Motorola, Inc. System for communicating user-selected criteria filter prepared at wireless client to communication server for filtering data transferred from host to said wireless client
US6112227A (en) 1998-08-06 2000-08-29 Heiner; Jeffrey Nelson Filter-in method for reducing junk e-mail
US6122657A (en) 1997-02-04 2000-09-19 Networks Associates, Inc. Internet computer system with methods for dynamic filtering of hypertext tags and content
US6128608A (en) 1998-05-01 2000-10-03 Barnhill Technologies, Llc Enhancing knowledge discovery using multiple support vector machines
US6144934A (en) 1996-09-18 2000-11-07 Secure Computing Corporation Binary filter using pattern recognition
US6161130A (en) 1998-06-23 2000-12-12 Microsoft Corporation Technique which utilizes a probabilistic classifier to detect "junk" e-mail by automatically updating a training and re-training the classifier based on the updated training set
US6167434A (en) 1998-07-15 2000-12-26 Pang; Stephen Y. Computer code for removing junk e-mail messages
US6192114B1 (en) 1998-09-02 2001-02-20 Cbt Flint Partners Method and apparatus for billing a fee to a party initiating an electronic mail communication when the party is not on an authorization list associated with the party to whom the communication is directed
US6192360B1 (en) 1998-06-23 2001-02-20 Microsoft Corporation Methods and apparatus for classifying text and for building a text classifier
US6195698B1 (en) 1998-04-13 2001-02-27 Compaq Computer Corporation Method for selectively restricting access to computer systems
US6199103B1 (en) 1997-06-24 2001-03-06 Omron Corporation Electronic mail determination method and system and storage medium
US6199102B1 (en) 1997-08-26 2001-03-06 Christopher Alan Cobb Method and system for filtering electronic messages
JP2001505371A (en) 1995-05-08 2001-04-17 コンプサーブ、インコーポレーテッド Regulatory electronic message management device
US6249807B1 (en) 1998-11-17 2001-06-19 Kana Communications, Inc. Method and apparatus for performing enterprise email management
US6266692B1 (en) 1999-01-04 2001-07-24 International Business Machines Corporation Method for blocking all unwanted e-mail (SPAM) using a header-based password
US6308273B1 (en) 1998-06-12 2001-10-23 Microsoft Corporation Method and system of security location discrimination
US6314421B1 (en) 1998-05-12 2001-11-06 David M. Sharnoff Method and apparatus for indexing documents for message filtering
US20010039575A1 (en) 1998-02-04 2001-11-08 Thomas Freund Apparatus and method for scheduling and dispatching queued client requests within a server in a client/server computer system
US6321267B1 (en) 1999-11-23 2001-11-20 Escom Corporation Method and apparatus for filtering junk email
US6324569B1 (en) 1998-09-23 2001-11-27 John W. L. Ogilvie Self-removing email verified or designated as such by a message distributor for the convenience of a recipient
US20010046307A1 (en) 1998-04-30 2001-11-29 Hewlett-Packard Company Method and apparatus for digital watermarking of images
US6327617B1 (en) 1995-11-27 2001-12-04 Microsoft Corporation Method and system for identifying and obtaining computer software from a remote computer
US20010049745A1 (en) 2000-05-03 2001-12-06 Daniel Schoeffler Method of enabling transmission and reception of communication when current destination for recipient is unknown to sender
US6330590B1 (en) 1999-01-05 2001-12-11 William D. Cotten Preventing delivery of unwanted bulk e-mail
US6332164B1 (en) 1997-10-24 2001-12-18 At&T Corp. System for recipient control of E-mail message by sending complete version of message only with confirmation from recipient to receive message
US20020016824A1 (en) 1997-11-25 2002-02-07 Robert G. Leeds Junk electronic mail detector and eliminator
US6351740B1 (en) 1997-12-01 2002-02-26 The Board Of Trustees Of The Leland Stanford Junior University Method and system for training dynamic nonlinear adaptive filters which have embedded memory
US6370526B1 (en) 1999-05-18 2002-04-09 International Business Machines Corporation Self-adaptive method and system for providing a user-preferred ranking order of object sets
US20020059425A1 (en) 2000-06-22 2002-05-16 Microsoft Corporation Distributed computing services platform
US20020073157A1 (en) 2000-12-08 2002-06-13 Newman Paula S. Method and apparatus for presenting e-mail threads as semi-connected text by removing redundant material
US20020091738A1 (en) 2000-06-12 2002-07-11 Rohrabaugh Gary B. Resolution independent vector display of internet content
US6421709B1 (en) 1997-12-22 2002-07-16 Accepted Marketing, Inc. E-mail filter and method thereof
US6424997B1 (en) 1999-01-27 2002-07-23 International Business Machines Corporation Machine learning based electronic messaging system
US6434600B2 (en) 1998-09-15 2002-08-13 Microsoft Corporation Methods and systems for securely delivering electronic mail to hosts having dynamic IP addresses
US20020124025A1 (en) 2001-03-01 2002-09-05 International Business Machines Corporataion Scanning and outputting textual information in web page images
US6449635B1 (en) 1999-04-21 2002-09-10 Mindarrow Systems, Inc. Electronic mail deployment system
US20020129111A1 (en) 2001-01-15 2002-09-12 Cooper Gerald M. Filtering unsolicited email
US6453327B1 (en) 1996-06-10 2002-09-17 Sun Microsystems, Inc. Method and apparatus for identifying and discarding junk electronic mail
US20020147782A1 (en) 2001-03-30 2002-10-10 Koninklijke Philips Electronics N.V. System for parental control in video programs based on multimedia content information
US6477551B1 (en) 1999-02-16 2002-11-05 International Business Machines Corporation Interactive electronic messaging system
JP2002537727A (en) 1999-02-17 2002-11-05 アーゴウ インターラクティブ リミテッド Electronic mail proxy and filter device and method
US20020169954A1 (en) 1998-11-03 2002-11-14 Bandini Jean-Christophe Denis Method and system for e-mail message transmission
US6484197B1 (en) 1998-11-07 2002-11-19 International Business Machines Corporation Filtering incoming e-mail
US6484261B1 (en) 1998-02-17 2002-11-19 Cisco Technology, Inc. Graphical network security policy management
US20020174185A1 (en) 2001-05-01 2002-11-21 Jai Rawat Method and system of automating data capture from electronic correspondence
US20020184315A1 (en) 2001-03-16 2002-12-05 Earnest Jerry Brett Redundant email address detection and capture system
US20020199095A1 (en) 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication
US20030009495A1 (en) 2001-06-29 2003-01-09 Akli Adjaoute Systems and methods for filtering electronic content
US20030009698A1 (en) 2001-05-30 2003-01-09 Cascadezone, Inc. Spam avenger
US20030016872A1 (en) 2001-07-23 2003-01-23 Hung-Ming Sun Method of screening a group of images
TW519591B (en) 2001-06-26 2003-02-01 Wistron Corp Virtual e-mail server system
US6519580B1 (en) 2000-06-08 2003-02-11 International Business Machines Corporation Decision-tree-based symbolic rule induction system for text categorization
TW520483B (en) 2000-09-13 2003-02-11 He-Shin Liau Computer program management system
US20030037074A1 (en) 2001-05-01 2003-02-20 Ibm Corporation System and method for aggregating ranking results from various sources to improve the results of web searching
TW521213B (en) 2000-03-27 2003-02-21 Agc Technology Inc Portable electronics information transmission
US20030041126A1 (en) 2001-05-15 2003-02-27 Buford John F. Parsing of nested internet electronic mail documents
US6546416B1 (en) 1998-12-09 2003-04-08 Infoseek Corporation Method and system for selectively blocking delivery of bulk electronic mail
US6546390B1 (en) 1999-06-11 2003-04-08 Abuzz Technologies, Inc. Method and apparatus for evaluating relevancy of messages to users
US20030088627A1 (en) 2001-07-26 2003-05-08 Rothwell Anton C. Intelligent SPAM detection system using an updateable neural analysis engine
US6592627B1 (en) 1999-06-10 2003-07-15 International Business Machines Corporation System and method for organizing repositories of semi-structured documents such as email
US20030149733A1 (en) 1999-01-29 2003-08-07 Digital Impact Method and system for remotely sensing the file formats processed by an e-mail client
US6615242B1 (en) 1998-12-28 2003-09-02 At&T Corp. Automatic uniform resource locator-based message filter
US6618747B1 (en) 1998-11-25 2003-09-09 Francis H. Flynn Electronic communication delivery confirmation and verification system
US20030191969A1 (en) 2000-02-08 2003-10-09 Katsikas Peter L. System for eliminating unauthorized electronic mail
US6633855B1 (en) 2000-01-06 2003-10-14 International Business Machines Corporation Method, system, and program for filtering content using neural networks
US20030204569A1 (en) 2002-04-29 2003-10-30 Michael R. Andrews Method and apparatus for filtering e-mail infected with a previously unidentified computer virus
US6643686B1 (en) 1998-12-18 2003-11-04 At&T Corp. System and method for counteracting message filtering
US6654787B1 (en) 1998-12-31 2003-11-25 Brightmail, Incorporated Method and apparatus for filtering e-mail
US20030229672A1 (en) 2002-06-05 2003-12-11 Kohn Daniel Mark Enforceable spam identification and reduction system, and method thereof
US20040003283A1 (en) * 2002-06-26 2004-01-01 Goodman Joshua Theodore Spam detector with challenges
US20040015554A1 (en) 2002-07-16 2004-01-22 Brian Wilson Active e-mail filter with challenge-response
US6684201B1 (en) 2000-03-31 2004-01-27 Microsoft Corporation Linguistic disambiguation system and method using string-based pattern training to learn to resolve ambiguity sites
US20040019651A1 (en) 2002-07-29 2004-01-29 Andaker Kristian L. M. Categorizing electronic messages based on collaborative feedback
US6691156B1 (en) 2000-03-10 2004-02-10 International Business Machines Corporation Method for restricting delivery of unsolicited E-mail
US6701350B1 (en) 1999-09-08 2004-03-02 Nortel Networks Limited System and method for web page filtering
US6701440B1 (en) 2000-01-06 2004-03-02 Networks Associates Technology, Inc. Method and system for protecting a computer using a remote e-mail scanning device
US6704772B1 (en) 1999-09-20 2004-03-09 Microsoft Corporation Thread based email
US20040054887A1 (en) 2002-09-12 2004-03-18 International Business Machines Corporation Method and system for selective email acceptance via encoded email identifiers
US20040059697A1 (en) 2002-09-24 2004-03-25 Forman George Henry Feature selection for two-class classification systems
US20040068543A1 (en) * 2002-10-03 2004-04-08 Ralph Seifert Method and apparatus for processing e-mail
US20040073617A1 (en) 2000-06-19 2004-04-15 Milliken Walter Clark Hash-based systems and methods for detecting and preventing transmission of unwanted e-mail
US6728690B1 (en) 1999-11-23 2004-04-27 Microsoft Corporation Classification system trainer employing maximum margin back-propagation with probabilistic outputs
US20040083270A1 (en) 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
US6732273B1 (en) 1998-10-21 2004-05-04 Lucent Technologies Inc. Priority and security coding system for electronic mail messages
US6732149B1 (en) 1999-04-09 2004-05-04 International Business Machines Corporation System and method for hindering undesired transmission or receipt of electronic messages
US6732157B1 (en) 2002-12-13 2004-05-04 Networks Associates Technology, Inc. Comprehensive anti-spam system, method, and computer program product for filtering unwanted e-mail messages
US20040093371A1 (en) 2002-11-08 2004-05-13 Microsoft Corporation. Memory bound functions for spam deterrence and the like
US6742047B1 (en) 1997-03-27 2004-05-25 Intel Corporation Method and apparatus for dynamically filtering network content
US6748422B2 (en) 2000-10-19 2004-06-08 Ebay Inc. System and method to control sending of unsolicited communications relating to a plurality of listings in a network-based commerce facility
US6751348B2 (en) 2001-03-29 2004-06-15 Fotonation Holdings, Llc Automated detection of pornographic images
WO2004054188A1 (en) 2002-12-10 2004-06-24 Mk Secure Solutions Ltd Electronic mail system
US6757830B1 (en) 2000-10-03 2004-06-29 Networks Associates Technology, Inc. Detecting unwanted properties in received email messages
US20040139165A1 (en) 2003-01-09 2004-07-15 Microsoft Corporation Framework to enable integration of anti-spam technologies
US20040139160A1 (en) 2003-01-09 2004-07-15 Microsoft Corporation Framework to enable integration of anti-spam technologies
US6768991B2 (en) 2001-05-15 2004-07-27 Networks Associates Technology, Inc. Searching for sequences of character data
US20040148330A1 (en) 2003-01-24 2004-07-29 Joshua Alspector Group based spam classification
US6775704B1 (en) 2000-12-28 2004-08-10 Networks Associates Technology, Inc. System and method for preventing a spoofed remote procedure call denial of service attack in a networked computing environment
US6779021B1 (en) 2000-07-28 2004-08-17 International Business Machines Corporation Method and system for predicting and managing undesirable electronic mail
US6785820B1 (en) 2002-04-02 2004-08-31 Networks Associates Technology, Inc. System, method and computer program product for conditionally updating a security program
US20040177120A1 (en) 2003-03-07 2004-09-09 Kirsch Steven T. Method for filtering e-mail messages
US20040181571A1 (en) * 2003-03-12 2004-09-16 Atkinson Robert George Reducing unwanted and unsolicited electronic messages by preventing connection hijacking and domain spoofing
US20040193684A1 (en) 2003-03-26 2004-09-30 Roy Ben-Yoseph Identifying and using identities deemed to be known to a user
US20040199585A1 (en) 2001-06-29 2004-10-07 Bing Wang Apparatus and method for handling electronic mail
US20040199594A1 (en) 2001-06-21 2004-10-07 Radatti Peter V. Apparatus, methods and articles of manufacture for intercepting, examining and controlling code, data and files and their transfer
US20040205135A1 (en) 2003-03-25 2004-10-14 Hallam-Baker Phillip Martin Control and management of electronic messaging
US20040210640A1 (en) 2003-04-17 2004-10-21 Chadwick Michael Christopher Mail server probability spam filter
US20040215977A1 (en) 2003-03-03 2004-10-28 Goodman Joshua T. Intelligent quarantining for spam prevention
US20040255122A1 (en) 2003-06-12 2004-12-16 Aleksandr Ingerman Categorizing electronic messages based on trust between electronic messaging entities
US20040260776A1 (en) 2003-06-23 2004-12-23 Starbuck Bryan T. Advanced spam detection techniques
US6842773B1 (en) 2000-08-24 2005-01-11 Yahoo ! Inc. Processing of textual electronic communication distributed in bulk
US20050015456A1 (en) * 2002-08-30 2005-01-20 Martinson John Robert System and method for eliminating unsolicited junk or spam electronic mail
US20050015455A1 (en) 2003-07-18 2005-01-20 Liu Gary G. SPAM processing system and methods including shared information among plural SPAM filters
US20050021649A1 (en) * 2003-06-20 2005-01-27 Goodman Joshua T. Prevention of outgoing spam
US6853749B2 (en) 2000-12-13 2005-02-08 Panasonic Communications Co. Ltd. Information communications apparatus
US20050041789A1 (en) 2003-08-19 2005-02-24 Rodney Warren-Smith Method and apparatus for filtering electronic mail
US20050050150A1 (en) 2003-08-29 2005-03-03 Sam Dinkin Filter, system and method for filtering an electronic mail message
US6868498B1 (en) 1999-09-01 2005-03-15 Peter L. Katsikas System for eliminating unauthorized electronic mail
US20050060643A1 (en) 2003-08-25 2005-03-17 Miavia, Inc. Document similarity detection and classification system
US20050076084A1 (en) 2003-10-03 2005-04-07 Corvigo Dynamic message filtering
US20050080855A1 (en) 2003-10-09 2005-04-14 Murray David J. Method for creating a whitelist for processing e-mails
US20050080889A1 (en) 2003-10-14 2005-04-14 Malik Dale W. Child protection from harmful email
US20050081059A1 (en) 1997-07-24 2005-04-14 Bandini Jean-Christophe Denis Method and system for e-mail filtering
US20050091321A1 (en) 2003-10-14 2005-04-28 Daniell W. T. Identifying undesired email messages having attachments
US20050091320A1 (en) 2003-10-09 2005-04-28 Kirsch Steven T. Method and system for categorizing and processing e-mails
US20050097174A1 (en) 2003-10-14 2005-05-05 Daniell W. T. Filtered email differentiation
US6892193B2 (en) 2001-05-10 2005-05-10 International Business Machines Corporation Method and apparatus for inducing classifiers for multimedia based on unified representation of features reflecting disparate modalities
US20050102366A1 (en) 2003-11-07 2005-05-12 Kirsch Steven T. E-mail filter employing adaptive ruleset
US20050108340A1 (en) 2003-05-15 2005-05-19 Matt Gleeson Method and apparatus for filtering email spam based on similarity measures

Patent Citations (223)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5377354A (en) 1989-08-15 1994-12-27 Digital Equipment Corporation Method and system for sorting and prioritizing electronic mail messages
US5459717A (en) 1994-03-25 1995-10-17 Sprint International Communications Corporation Method and apparatus for routing messagers in an electronic messaging system
US5835087A (en) 1994-11-29 1998-11-10 Herz; Frederick S. M. System for generation of object profiles for a system for customized electronic identification of desirable objects
US5619648A (en) 1994-11-30 1997-04-08 Lucent Technologies Inc. Message filtering techniques
US5638487A (en) 1994-12-30 1997-06-10 Purespeech, Inc. Automatic speech recognition
JP2001505371A (en) 1995-05-08 2001-04-17 コンプサーブ、インコーポレーテッド Regulatory electronic message management device
US20020016956A1 (en) 1995-11-27 2002-02-07 Microsoft Corporation Method and system for identifying and obtaining computer software from a remote computer
US6327617B1 (en) 1995-11-27 2001-12-04 Microsoft Corporation Method and system for identifying and obtaining computer software from a remote computer
US6101531A (en) 1995-12-19 2000-08-08 Motorola, Inc. System for communicating user-selected criteria filter prepared at wireless client to communication server for filtering data transferred from host to said wireless client
US5704017A (en) 1996-02-16 1997-12-30 Microsoft Corporation Collaborative filtering utilizing a belief network
US5884033A (en) 1996-05-15 1999-03-16 Spyglass, Inc. Internet filtering system for filtering data transferred over the internet utilizing immediate and deferred filtering actions
US6453327B1 (en) 1996-06-10 2002-09-17 Sun Microsystems, Inc. Method and apparatus for identifying and discarding junk electronic mail
US6144934A (en) 1996-09-18 2000-11-07 Secure Computing Corporation Binary filter using pattern recognition
US6072942A (en) 1996-09-18 2000-06-06 Secure Computing Corporation System and method of electronic mail filtering using interconnected nodes
US6041321A (en) 1996-10-15 2000-03-21 Sgs-Thomson Microelectronics S.R.L. Electronic device for performing convolution operations
US5911776A (en) 1996-12-18 1999-06-15 Unisys Corporation Automatic format conversion system and publishing methodology for multi-user network
US5930471A (en) 1996-12-26 1999-07-27 At&T Corp Communications system and method of operation for electronic messaging using structured response objects and virtual mailboxes
US5905859A (en) 1997-01-09 1999-05-18 International Business Machines Corporation Managed network device security method and apparatus
US5805801A (en) 1997-01-09 1998-09-08 International Business Machines Corporation System and method for detecting and preventing security
US6122657A (en) 1997-02-04 2000-09-19 Networks Associates, Inc. Internet computer system with methods for dynamic filtering of hypertext tags and content
US6742047B1 (en) 1997-03-27 2004-05-25 Intel Corporation Method and apparatus for dynamically filtering network content
US6047242A (en) 1997-05-28 2000-04-04 Siemens Aktiengesellschaft Computer system for protecting software and a method for protecting software
US6199103B1 (en) 1997-06-24 2001-03-06 Omron Corporation Electronic mail determination method and system and storage medium
US7117358B2 (en) 1997-07-24 2006-10-03 Tumbleweed Communications Corp. Method and system for filtering communication
US20020199095A1 (en) 1997-07-24 2002-12-26 Jean-Christophe Bandini Method and system for filtering communication
US20050081059A1 (en) 1997-07-24 2005-04-14 Bandini Jean-Christophe Denis Method and system for e-mail filtering
US5999967A (en) 1997-08-17 1999-12-07 Sundsted; Todd Electronic mail filtering by electronic stamp
US6199102B1 (en) 1997-08-26 2001-03-06 Christopher Alan Cobb Method and system for filtering electronic messages
US6332164B1 (en) 1997-10-24 2001-12-18 At&T Corp. System for recipient control of E-mail message by sending complete version of message only with confirmation from recipient to receive message
US6041324A (en) 1997-11-17 2000-03-21 International Business Machines Corporation System and method for identifying valid portion of computer resource identifier
US6003027A (en) 1997-11-21 1999-12-14 International Business Machines Corporation System and method for determining confidence levels for the results of a categorization system
US6393465B2 (en) 1997-11-25 2002-05-21 Nixmail Corporation Junk electronic mail detector and eliminator
US20020016824A1 (en) 1997-11-25 2002-02-07 Robert G. Leeds Junk electronic mail detector and eliminator
US6351740B1 (en) 1997-12-01 2002-02-26 The Board Of Trustees Of The Leland Stanford Junior University Method and system for training dynamic nonlinear adaptive filters which have embedded memory
US6421709B1 (en) 1997-12-22 2002-07-16 Accepted Marketing, Inc. E-mail filter and method thereof
US6023723A (en) 1997-12-22 2000-02-08 Accepted Marketing, Inc. Method and system for filtering unwanted junk e-mail utilizing a plurality of filtering mechanisms
US6052709A (en) 1997-12-23 2000-04-18 Bright Light Technologies, Inc. Apparatus and method for controlling delivery of unsolicited electronic mail
US5999932A (en) 1998-01-13 1999-12-07 Bright Light Technologies, Inc. System and method for filtering unsolicited electronic mail messages using data matching and heuristic processing
US20010039575A1 (en) 1998-02-04 2001-11-08 Thomas Freund Apparatus and method for scheduling and dispatching queued client requests within a server in a client/server computer system
US6505250B2 (en) 1998-02-04 2003-01-07 International Business Machines Corporation Apparatus and method for scheduling and dispatching queued client requests within a server in a client/server computer system
US6484261B1 (en) 1998-02-17 2002-11-19 Cisco Technology, Inc. Graphical network security policy management
US6195698B1 (en) 1998-04-13 2001-02-27 Compaq Computer Corporation Method for selectively restricting access to computer systems
US20010046307A1 (en) 1998-04-30 2001-11-29 Hewlett-Packard Company Method and apparatus for digital watermarking of images
US6427141B1 (en) 1998-05-01 2002-07-30 Biowulf Technologies, Llc Enhancing knowledge discovery using multiple support vector machines
US6157921A (en) 1998-05-01 2000-12-05 Barnhill Technologies, Llc Enhancing knowledge discovery using support vector machines in a distributed network environment
US6128608A (en) 1998-05-01 2000-10-03 Barnhill Technologies, Llc Enhancing knowledge discovery using multiple support vector machines
US6314421B1 (en) 1998-05-12 2001-11-06 David M. Sharnoff Method and apparatus for indexing documents for message filtering
US6308273B1 (en) 1998-06-12 2001-10-23 Microsoft Corporation Method and system of security location discrimination
US6192360B1 (en) 1998-06-23 2001-02-20 Microsoft Corporation Methods and apparatus for classifying text and for building a text classifier
US6161130A (en) 1998-06-23 2000-12-12 Microsoft Corporation Technique which utilizes a probabilistic classifier to detect "junk" e-mail by automatically updating a training and re-training the classifier based on the updated training set
US20060031303A1 (en) 1998-07-15 2006-02-09 Pang Stephen Y System for policing junk e-mail massages
US6167434A (en) 1998-07-15 2000-12-26 Pang; Stephen Y. Computer code for removing junk e-mail messages
US20080016579A1 (en) 1998-07-15 2008-01-17 Pang Stephen Y System for policing junk e-mail messages
US6112227A (en) 1998-08-06 2000-08-29 Heiner; Jeffrey Nelson Filter-in method for reducing junk e-mail
US6192114B1 (en) 1998-09-02 2001-02-20 Cbt Flint Partners Method and apparatus for billing a fee to a party initiating an electronic mail communication when the party is not on an authorization list associated with the party to whom the communication is directed
US6434600B2 (en) 1998-09-15 2002-08-13 Microsoft Corporation Methods and systems for securely delivering electronic mail to hosts having dynamic IP addresses
US6324569B1 (en) 1998-09-23 2001-11-27 John W. L. Ogilvie Self-removing email verified or designated as such by a message distributor for the convenience of a recipient
US6732273B1 (en) 1998-10-21 2004-05-04 Lucent Technologies Inc. Priority and security coding system for electronic mail messages
US20020169954A1 (en) 1998-11-03 2002-11-14 Bandini Jean-Christophe Denis Method and system for e-mail message transmission
US6484197B1 (en) 1998-11-07 2002-11-19 International Business Machines Corporation Filtering incoming e-mail
US6249807B1 (en) 1998-11-17 2001-06-19 Kana Communications, Inc. Method and apparatus for performing enterprise email management
US6618747B1 (en) 1998-11-25 2003-09-09 Francis H. Flynn Electronic communication delivery confirmation and verification system
US6546416B1 (en) 1998-12-09 2003-04-08 Infoseek Corporation Method and system for selectively blocking delivery of bulk electronic mail
US20030167311A1 (en) 1998-12-09 2003-09-04 Kirsch Steven T. Method and system for selectively blocking delivery of electronic mail
US6915334B1 (en) 1998-12-18 2005-07-05 At&T Corp. System and method for counteracting message filtering
US6643686B1 (en) 1998-12-18 2003-11-04 At&T Corp. System and method for counteracting message filtering
US6615242B1 (en) 1998-12-28 2003-09-02 At&T Corp. Automatic uniform resource locator-based message filter
US6654787B1 (en) 1998-12-31 2003-11-25 Brightmail, Incorporated Method and apparatus for filtering e-mail
US6266692B1 (en) 1999-01-04 2001-07-24 International Business Machines Corporation Method for blocking all unwanted e-mail (SPAM) using a header-based password
US6330590B1 (en) 1999-01-05 2001-12-11 William D. Cotten Preventing delivery of unwanted bulk e-mail
US6424997B1 (en) 1999-01-27 2002-07-23 International Business Machines Corporation Machine learning based electronic messaging system
US20030149733A1 (en) 1999-01-29 2003-08-07 Digital Impact Method and system for remotely sensing the file formats processed by an e-mail client
US6477551B1 (en) 1999-02-16 2002-11-05 International Business Machines Corporation Interactive electronic messaging system
JP2002537727A (en) 1999-02-17 2002-11-05 アーゴウ インターラクティブ リミテッド Electronic mail proxy and filter device and method
US7032030B1 (en) 1999-03-11 2006-04-18 John David Codignotto Message publishing system and method
US6732149B1 (en) 1999-04-09 2004-05-04 International Business Machines Corporation System and method for hindering undesired transmission or receipt of electronic messages
US6449635B1 (en) 1999-04-21 2002-09-10 Mindarrow Systems, Inc. Electronic mail deployment system
US6370526B1 (en) 1999-05-18 2002-04-09 International Business Machines Corporation Self-adaptive method and system for providing a user-preferred ranking order of object sets
US6592627B1 (en) 1999-06-10 2003-07-15 International Business Machines Corporation System and method for organizing repositories of semi-structured documents such as email
US6546390B1 (en) 1999-06-11 2003-04-08 Abuzz Technologies, Inc. Method and apparatus for evaluating relevancy of messages to users
US6868498B1 (en) 1999-09-01 2005-03-15 Peter L. Katsikas System for eliminating unauthorized electronic mail
US6701350B1 (en) 1999-09-08 2004-03-02 Nortel Networks Limited System and method for web page filtering
US6704772B1 (en) 1999-09-20 2004-03-09 Microsoft Corporation Thread based email
US6321267B1 (en) 1999-11-23 2001-11-20 Escom Corporation Method and apparatus for filtering junk email
US6728690B1 (en) 1999-11-23 2004-04-27 Microsoft Corporation Classification system trainer employing maximum margin back-propagation with probabilistic outputs
US6633855B1 (en) 2000-01-06 2003-10-14 International Business Machines Corporation Method, system, and program for filtering content using neural networks
US6701440B1 (en) 2000-01-06 2004-03-02 Networks Associates Technology, Inc. Method and system for protecting a computer using a remote e-mail scanning device
US20040019650A1 (en) 2000-01-06 2004-01-29 Auvenshine John Jason Method, system, and program for filtering content using neural networks
US7072942B1 (en) 2000-02-04 2006-07-04 Microsoft Corporation Email filtering methods and systems
US20030191969A1 (en) 2000-02-08 2003-10-09 Katsikas Peter L. System for eliminating unauthorized electronic mail
US6691156B1 (en) 2000-03-10 2004-02-10 International Business Machines Corporation Method for restricting delivery of unsolicited E-mail
TW521213B (en) 2000-03-27 2003-02-21 Agc Technology Inc Portable electronics information transmission
US6684201B1 (en) 2000-03-31 2004-01-27 Microsoft Corporation Linguistic disambiguation system and method using string-based pattern training to learn to resolve ambiguity sites
US20010049745A1 (en) 2000-05-03 2001-12-06 Daniel Schoeffler Method of enabling transmission and reception of communication when current destination for recipient is unknown to sender
US6519580B1 (en) 2000-06-08 2003-02-11 International Business Machines Corporation Decision-tree-based symbolic rule induction system for text categorization
US20020091738A1 (en) 2000-06-12 2002-07-11 Rohrabaugh Gary B. Resolution independent vector display of internet content
US20040073617A1 (en) 2000-06-19 2004-04-15 Milliken Walter Clark Hash-based systems and methods for detecting and preventing transmission of unwanted e-mail
US20020059425A1 (en) 2000-06-22 2002-05-16 Microsoft Corporation Distributed computing services platform
US7003555B1 (en) 2000-06-23 2006-02-21 Cloudshield Technologies, Inc. Apparatus and method for domain name resolution
US6779021B1 (en) 2000-07-28 2004-08-17 International Business Machines Corporation Method and system for predicting and managing undesirable electronic mail
US7321922B2 (en) 2000-08-24 2008-01-22 Yahoo! Inc. Automated solicited message detection
US6842773B1 (en) 2000-08-24 2005-01-11 Yahoo ! Inc. Processing of textual electronic communication distributed in bulk
TW520483B (en) 2000-09-13 2003-02-11 He-Shin Liau Computer program management system
US6757830B1 (en) 2000-10-03 2004-06-29 Networks Associates Technology, Inc. Detecting unwanted properties in received email messages
US6971023B1 (en) 2000-10-03 2005-11-29 Mcafee, Inc. Authorizing an additional computer program module for use with a core computer program
US6748422B2 (en) 2000-10-19 2004-06-08 Ebay Inc. System and method to control sending of unsolicited communications relating to a plurality of listings in a network-based commerce facility
US20020073157A1 (en) 2000-12-08 2002-06-13 Newman Paula S. Method and apparatus for presenting e-mail threads as semi-connected text by removing redundant material
US6853749B2 (en) 2000-12-13 2005-02-08 Panasonic Communications Co. Ltd. Information communications apparatus
US6775704B1 (en) 2000-12-28 2004-08-10 Networks Associates Technology, Inc. System and method for preventing a spoofed remote procedure call denial of service attack in a networked computing environment
US20050159136A1 (en) 2000-12-29 2005-07-21 Andrew Rouse System and method for providing wireless device access
US20020129111A1 (en) 2001-01-15 2002-09-12 Cooper Gerald M. Filtering unsolicited email
US6901398B1 (en) 2001-02-12 2005-05-31 Microsoft Corporation System and method for constructing and personalizing a universal information classifier
US20020124025A1 (en) 2001-03-01 2002-09-05 International Business Machines Corporataion Scanning and outputting textual information in web page images
US20020184315A1 (en) 2001-03-16 2002-12-05 Earnest Jerry Brett Redundant email address detection and capture system
US6928465B2 (en) 2001-03-16 2005-08-09 Wells Fargo Bank, N.A. Redundant email address detection and capture system
US6751348B2 (en) 2001-03-29 2004-06-15 Fotonation Holdings, Llc Automated detection of pornographic images
US20020147782A1 (en) 2001-03-30 2002-10-10 Koninklijke Philips Electronics N.V. System for parental control in video programs based on multimedia content information
US6920477B2 (en) 2001-04-06 2005-07-19 President And Fellows Of Harvard College Distributed, compressed Bloom filter Web cache server
US20020174185A1 (en) 2001-05-01 2002-11-21 Jai Rawat Method and system of automating data capture from electronic correspondence
US20030037074A1 (en) 2001-05-01 2003-02-20 Ibm Corporation System and method for aggregating ranking results from various sources to improve the results of web searching
US6892193B2 (en) 2001-05-10 2005-05-10 International Business Machines Corporation Method and apparatus for inducing classifiers for multimedia based on unified representation of features reflecting disparate modalities
US6768991B2 (en) 2001-05-15 2004-07-27 Networks Associates Technology, Inc. Searching for sequences of character data
US20030041126A1 (en) 2001-05-15 2003-02-27 Buford John F. Parsing of nested internet electronic mail documents
US20030009698A1 (en) 2001-05-30 2003-01-09 Cascadezone, Inc. Spam avenger
US20040199594A1 (en) 2001-06-21 2004-10-07 Radatti Peter V. Apparatus, methods and articles of manufacture for intercepting, examining and controlling code, data and files and their transfer
US6957259B1 (en) 2001-06-25 2005-10-18 Bellsouth Intellectual Property Corporation System and method for regulating emails by maintaining, updating and comparing the profile information for the email source to the target email statistics
TW519591B (en) 2001-06-26 2003-02-01 Wistron Corp Virtual e-mail server system
US20040199585A1 (en) 2001-06-29 2004-10-07 Bing Wang Apparatus and method for handling electronic mail
US20030009495A1 (en) 2001-06-29 2003-01-09 Akli Adjaoute Systems and methods for filtering electronic content
US20030016872A1 (en) 2001-07-23 2003-01-23 Hung-Ming Sun Method of screening a group of images
US20030088627A1 (en) 2001-07-26 2003-05-08 Rothwell Anton C. Intelligent SPAM detection system using an updateable neural analysis engine
US7146402B2 (en) 2001-08-31 2006-12-05 Sendmail, Inc. E-mail system providing filtering methodology on a per-domain basis
US20060036701A1 (en) 2001-11-20 2006-02-16 Bulfer Andrew F Messaging system having message filtering and access control
US7039949B2 (en) 2001-12-10 2006-05-02 Brian Ross Cartmell Method and system for blocking unwanted communications
US20070130350A1 (en) 2002-03-08 2007-06-07 Secure Computing Corporation Web Reputation Scoring
US20060015942A1 (en) 2002-03-08 2006-01-19 Ciphertrust, Inc. Systems and methods for classification of messaging entities
US6785820B1 (en) 2002-04-02 2004-08-31 Networks Associates Technology, Inc. System, method and computer program product for conditionally updating a security program
US20030204569A1 (en) 2002-04-29 2003-10-30 Michael R. Andrews Method and apparatus for filtering e-mail infected with a previously unidentified computer virus
US20030229672A1 (en) 2002-06-05 2003-12-11 Kohn Daniel Mark Enforceable spam identification and reduction system, and method thereof
US20040003283A1 (en) * 2002-06-26 2004-01-01 Goodman Joshua Theodore Spam detector with challenges
US20040015554A1 (en) 2002-07-16 2004-01-22 Brian Wilson Active e-mail filter with challenge-response
US20040019651A1 (en) 2002-07-29 2004-01-29 Andaker Kristian L. M. Categorizing electronic messages based on collaborative feedback
US6990485B2 (en) 2002-08-02 2006-01-24 Hewlett-Packard Development Company, L.P. System and method for inducing a top-down hierarchical categorizer
US20050015456A1 (en) * 2002-08-30 2005-01-20 Martinson John Robert System and method for eliminating unsolicited junk or spam electronic mail
US20040054887A1 (en) 2002-09-12 2004-03-18 International Business Machines Corporation Method and system for selective email acceptance via encoded email identifiers
US20040059697A1 (en) 2002-09-24 2004-03-25 Forman George Henry Feature selection for two-class classification systems
US20040068543A1 (en) * 2002-10-03 2004-04-08 Ralph Seifert Method and apparatus for processing e-mail
US7188369B2 (en) 2002-10-03 2007-03-06 Trend Micro, Inc. System and method having an antivirus virtual scanning processor with plug-in functionalities
US20040083270A1 (en) 2002-10-23 2004-04-29 David Heckerman Method and system for identifying junk e-mail
US20040093371A1 (en) 2002-11-08 2004-05-13 Microsoft Corporation. Memory bound functions for spam deterrence and the like
WO2004054188A1 (en) 2002-12-10 2004-06-24 Mk Secure Solutions Ltd Electronic mail system
US6732157B1 (en) 2002-12-13 2004-05-04 Networks Associates Technology, Inc. Comprehensive anti-spam system, method, and computer program product for filtering unwanted e-mail messages
US20060265498A1 (en) 2002-12-26 2006-11-23 Yehuda Turgeman Detection and prevention of spam
US20040139165A1 (en) 2003-01-09 2004-07-15 Microsoft Corporation Framework to enable integration of anti-spam technologies
US20040139160A1 (en) 2003-01-09 2004-07-15 Microsoft Corporation Framework to enable integration of anti-spam technologies
US20040148330A1 (en) 2003-01-24 2004-07-29 Joshua Alspector Group based spam classification
US7089241B1 (en) 2003-01-24 2006-08-08 America Online, Inc. Classifier tuning based on data similarities
US7249162B2 (en) 2003-02-25 2007-07-24 Microsoft Corporation Adaptive junk message filtering system
US20040215977A1 (en) 2003-03-03 2004-10-28 Goodman Joshua T. Intelligent quarantining for spam prevention
US7219148B2 (en) 2003-03-03 2007-05-15 Microsoft Corporation Feedback loop for spam prevention
US20070208856A1 (en) 2003-03-03 2007-09-06 Microsoft Corporation Feedback loop for spam prevention
US20040177120A1 (en) 2003-03-07 2004-09-09 Kirsch Steven T. Method for filtering e-mail messages
US20040181571A1 (en) * 2003-03-12 2004-09-16 Atkinson Robert George Reducing unwanted and unsolicited electronic messages by preventing connection hijacking and domain spoofing
US20040205135A1 (en) 2003-03-25 2004-10-14 Hallam-Baker Phillip Martin Control and management of electronic messaging
US20040193684A1 (en) 2003-03-26 2004-09-30 Roy Ben-Yoseph Identifying and using identities deemed to be known to a user
US20040210640A1 (en) 2003-04-17 2004-10-21 Chadwick Michael Christopher Mail server probability spam filter
US7320020B2 (en) 2003-04-17 2008-01-15 The Go Daddy Group, Inc. Mail server probability spam filter
US20050108340A1 (en) 2003-05-15 2005-05-19 Matt Gleeson Method and apparatus for filtering email spam based on similarity measures
US20080104186A1 (en) 2003-05-29 2008-05-01 Mailfrontier, Inc. Automated Whitelist
US7293063B1 (en) 2003-06-04 2007-11-06 Symantec Corporation System utilizing updated spam signatures for performing secondary signature-based analysis of a held e-mail to improve spam email detection
US7287060B1 (en) 2003-06-12 2007-10-23 Storage Technology Corporation System and method for rating unsolicited e-mail
US20040255122A1 (en) 2003-06-12 2004-12-16 Aleksandr Ingerman Categorizing electronic messages based on trust between electronic messaging entities
US7263607B2 (en) 2003-06-12 2007-08-28 Microsoft Corporation Categorizing electronic messages based on trust between electronic messaging entities
US20050021649A1 (en) * 2003-06-20 2005-01-27 Goodman Joshua T. Prevention of outgoing spam
US7711779B2 (en) 2003-06-20 2010-05-04 Microsoft Corporation Prevention of outgoing spam
US20040260776A1 (en) 2003-06-23 2004-12-23 Starbuck Bryan T. Advanced spam detection techniques
US7051077B2 (en) 2003-06-30 2006-05-23 Mx Logic, Inc. Fuzzy logic voting method and system for classifying e-mail using inputs from multiple spam classifiers
US7155484B2 (en) 2003-06-30 2006-12-26 Bellsouth Intellectual Property Corporation Filtering email messages corresponding to undesirable geographical regions
US20050015455A1 (en) 2003-07-18 2005-01-20 Liu Gary G. SPAM processing system and methods including shared information among plural SPAM filters
US20050041789A1 (en) 2003-08-19 2005-02-24 Rodney Warren-Smith Method and apparatus for filtering electronic mail
US20050060643A1 (en) 2003-08-25 2005-03-17 Miavia, Inc. Document similarity detection and classification system
US20050050150A1 (en) 2003-08-29 2005-03-03 Sam Dinkin Filter, system and method for filtering an electronic mail message
US20070101423A1 (en) 2003-09-08 2007-05-03 Mailfrontier, Inc. Fraudulent message detection
US20090157708A1 (en) 2003-09-22 2009-06-18 Jean-Christophe Denis Bandini Delay technique in e-mail filtering system
US20050076084A1 (en) 2003-10-03 2005-04-07 Corvigo Dynamic message filtering
US20050091320A1 (en) 2003-10-09 2005-04-28 Kirsch Steven T. Method and system for categorizing and processing e-mails
US20050080855A1 (en) 2003-10-09 2005-04-14 Murray David J. Method for creating a whitelist for processing e-mails
US7366761B2 (en) 2003-10-09 2008-04-29 Abaca Technology Corporation Method for creating a whitelist for processing e-mails
US7206814B2 (en) 2003-10-09 2007-04-17 Propel Software Corporation Method and system for categorizing and processing e-mails
US20050080889A1 (en) 2003-10-14 2005-04-14 Malik Dale W. Child protection from harmful email
US20050091321A1 (en) 2003-10-14 2005-04-28 Daniell W. T. Identifying undesired email messages having attachments
US20050097174A1 (en) 2003-10-14 2005-05-05 Daniell W. T. Filtered email differentiation
US20050114452A1 (en) 2003-11-03 2005-05-26 Prakash Vipul V. Method and apparatus to block spam based on spam reports from a community of users
US20050102366A1 (en) 2003-11-07 2005-05-12 Kirsch Steven T. E-mail filter employing adaptive ruleset
US20050120019A1 (en) 2003-11-29 2005-06-02 International Business Machines Corporation Method and apparatus for the automatic identification of unsolicited e-mail messages (SPAM)
US20070143407A1 (en) 2003-12-30 2007-06-21 First Information Systems, Llc E-mail certification service
US20050188023A1 (en) 2004-01-08 2005-08-25 International Business Machines Corporation Method and apparatus for filtering spam email
US7359941B2 (en) 2004-01-08 2008-04-15 International Business Machines Corporation Method and apparatus for filtering spam email
US20050160148A1 (en) 2004-01-16 2005-07-21 Mailshell, Inc. System for determining degrees of similarity in email message information
US20050165895A1 (en) 2004-01-23 2005-07-28 International Business Machines Corporation Classification of electronic mail into multiple directories based upon their spam-like properties
US20050182735A1 (en) 2004-02-12 2005-08-18 Zager Robert P. Method and apparatus for implementing a micropayment system to control e-mail spam
US20050198270A1 (en) 2004-02-20 2005-09-08 Thilo Rusche Dual use counters for routing loops and spam detection
US20050228899A1 (en) 2004-02-26 2005-10-13 Brad Wendkos Systems and methods for producing, managing, delivering, retrieving, and/or tracking permission based communications
US20050204159A1 (en) 2004-03-09 2005-09-15 International Business Machines Corporation System, method and computer program to block spam
US20050204006A1 (en) 2004-03-12 2005-09-15 Purcell Sean E. Message junk rating interface
US20050204005A1 (en) 2004-03-12 2005-09-15 Purcell Sean E. Selective treatment of messages based on junk rating
US7600255B1 (en) * 2004-04-14 2009-10-06 Cisco Technology, Inc. Preventing network denial of service attacks using an accumulated proof-of-work approach
US20060031306A1 (en) 2004-04-29 2006-02-09 International Business Machines Corporation Method and apparatus for scoring unsolicited e-mail
US20060031464A1 (en) 2004-05-07 2006-02-09 Sandvine Incorporated System and method for detecting sources of abnormal computer network messages
US20060059238A1 (en) 2004-05-29 2006-03-16 Slater Charles S Monitoring the flow of messages received at a server
US7155243B2 (en) 2004-06-15 2006-12-26 Tekelec Methods, systems, and computer program products for content-based screening of messaging service messages
US20060026246A1 (en) * 2004-07-08 2006-02-02 Fukuhara Keith T System and method for authorizing delivery of E-mail and reducing spam
US20060036693A1 (en) 2004-08-12 2006-02-16 Microsoft Corporation Spam filtering with probabilistic secure hashes
US20060047769A1 (en) 2004-08-26 2006-03-02 International Business Machines Corporation System, method and program to limit rate of transferring messages from suspected spammers
US7574409B2 (en) 2004-11-04 2009-08-11 Vericept Corporation Method, apparatus, and system for clustering and classification
US20060168017A1 (en) 2004-11-30 2006-07-27 Microsoft Corporation Dynamic spam trap accounts
US20060123083A1 (en) 2004-12-03 2006-06-08 Xerox Corporation Adaptive spam message detector
US20060137009A1 (en) 2004-12-22 2006-06-22 V-Secure Technologies, Inc. Stateful attack protection
US20070130351A1 (en) 2005-06-02 2007-06-07 Secure Computing Corporation Aggregation of Reputation Data
US20070118759A1 (en) 2005-10-07 2007-05-24 Sheppard Scott K Undesirable email determination
US20070133034A1 (en) 2005-12-14 2007-06-14 Google Inc. Detecting and rejecting annoying documents
US20080114843A1 (en) 2006-11-14 2008-05-15 Mcafee, Inc. Method and system for handling unwanted email messages
US20080120413A1 (en) 2006-11-16 2008-05-22 Comcast Cable Holdings, Lcc Process for abuse mitigation

Non-Patent Citations (91)

* Cited by examiner, † Cited by third party
Title
"Camram Postage Stamp Basics", Internet Citation, Jun. 9, 2002, 17 pages.
"Clearswift Announces the Most Complete e-Policy-Based Email Content Security Product for Service Providers"; http://www.clearswift.com/news/item.aspx?ID=144. (Oct. 12, 2002).
"MIME", The Microsoft Computer Dictionary, 5th ed. Redmond, WA; Microsoft Press. May 1, 2002.
"Sender ID Framework Overview", retrieved from <<http://www.microsoft.com/mscorp.twc/privacy/spam/senderid/overview.mspx>> on Dec. 17, 2004, published Sep. 30, 2004.
"Sender ID Framework Overview", retrieved from > on Dec. 17, 2004, published Sep. 30, 2004.
"Sender ID: Two Protocols, Divided by a Common Syntax", retrieved from <<http://spf.pobox.com/senderid.html>> on Dec. 17, 2004, 2 pages.
"Sender ID: Two Protocols, Divided by a Common Syntax", retrieved from > on Dec. 17, 2004, 2 pages.
"SPF: Sender Policy Framework", retrieved from <<http://spf.pobox.com>> on Dec. 17, 2004, Copyright IC Group, Inc., 2004, 1 page.
"SPF: Sender Policy Framework", retrieved from > on Dec. 17, 2004, Copyright IC Group, Inc., 2004, 1 page.
"The Coordinated Spam Reduction Initiative", Microsoft Corporation, Feb. 13, 2004, pp. 1-54.
Allman, "Spam, Spam, Spam, Spam, Spam, the FTC, and Spam" Queue, Sep. 2003, pp. 62-69, vol. 1 Issue 6, ACM.
Androutsopoulos, et al., "An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam Filtering with Personal E-mail Messages", Proceedings of the 23rd ACM SIGIR Conference, pp. 160-167, 2000.
Androutsopoulos, "Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a Memory-Based Approach"; 4th PKDD Workshop on Machine Learning and Textual Information Access, 2000, 13 pages.
Argamon, et al., "Routing documents according to style"; In First International Workshop on Innovative Information Systems, 1998. 8 pages.
Balter, et al., "Bifrost Inbox Organizer: Giving users control over the inbox"; NordiCHI Oct. 2, 2002, pp. 111-118, Arhus, Denmark.
Bowman, "Hotmail Spam Filters Block Outgoing E-mail"; CNET News.com, Jan. 18, 2001. 3 pages.
Breiman, et al., "Classification and Regression Trees"; Wadsworth & Brooks, Monterey, CA (1984).
Broder, et al., "Syntactic Clustering of the Web" SRC Technical note, Digital Corporation, Jul. 25, 1997. 13 pages.
Byrne, "My Spambook: Was Thwarting UCE Address Culling Programs"; Google, Newsgroups: news.admin.net-abuse.email. comp.mail.sendmail, comp.security.unix; Jan. 19, 1997, 2 pages.
Cohen, "Learning Rules that Classify E-Mail", In the proceedings of the 1996 AAA Spring Symposium on Machine Learning in information Access. Downloaded from William Cohen's web page: http:www.research.att.com/ncohen/pubs.html.
Cranor, et al., "Spam!" Communications of the ACM, 1998, pp. 74-83, vol. 41. No. 8.
Cunningham, et al., "A Case-Based Approach to Spam Filtering that Can Track Concept Drift" Trinity College, Dublin, Department of Computer Science, May 13, 2003.
Dwork, et al., "Pricing via Processing or Combatting Junk Mail"; Presented at Crypto 92; pp. 1-11.
European Search Report dated Apr. 6, 2006 and mailed Apr. 6, 2006 for EP 04102242, 3 pages.
European Search Report, dated Jun. 9, 2005, mailed Aug. 22, 2005 for European Patent Application Serial No. EP04011978, 12 pages.
European Search Report, EP31087TE900, mailed Nov. 11, 2004.
Fawcett, ""In vivo" Spam Filtering: A Challenge Problem for KDD"; SIGKDD Explorations, Dec. 2003. pp. 140-148, vol. 5 Issue 2, ACM.
Federal Trade Commission, "False Claims in Spam"; A report by the FTC's division of marketing practices, Apr. 30, 2003, http://www.ftc.gov/reports/spam/030429spamreport.pdf.
Gee, "Using Latent Semantic Indexing to Filter Spam" Dept. of Computer Science and Engineering, University of Texas-Arlington. 5 pages. Proceedings of the 2003 ACM symposium on Applied Computing. 2003 portal.acm.org.
Graham, "A Plan for Spam, Online" Aug. 2002, XP002273602, http://www.paulgraham.com/spam.html, retrieved on Mar. 12, 2004.
Graham, "The Future of Spam", Computer Security Journal, CSI Computer Security Institute, vol. XIX, No. 1, Jan. 2003, pp. 1-5.
Hansell, "Internet Is Losing Ground in Battle Against Spam"; the New York Times: Technology section, Apr. 22, 2003.
Hayes, "Spam, Spam, Spam, Lovely Spam"; American Scientist Online, Jun. 30, 2003. pp. 1-6. vol. 91.
Hidalgo, "Evaluating Cost-Sensitive Unsolicited Bulk Email Categorization"; SAC 2002, pp. 615-620, ACM Madrid, Spain.
How to Obscure Any URL, http:www.pc-help.org/obscure.htm, last viewed on Jan. 18, 2003, 10 pages.
International Search Report dated Jan. 17, 2006, mailed Jan. 31, 2006, for PCT Application Serial No. PCT/US04/05501, 2 pages.
International Search Report, EP 03 00 6814, mailed Feb. 13, 2004.
Joachims, "Text Categorization with Support Vector Machines: Learning with Many Relevant Features", LS-8 Report 23, Nov. 1997, 18 pages.
Joachims, "Transductive Inference for Text Classification using Support Vector Machines", In Proceedings of the 16th International Conference on Machine Learning, 1999. pp. 200-209. Sabn Francisico, USA.
K. Mock. An Experimental Framework for Email Categorization and Management. Proceedings of the 24th Annual International ACM SIGIR Conference, pp. 392-393, 2001.
Kawamata, et al., "Internet Use Limitation", Started by Company, Part II. Full Monitoring/Limiting Software, NIKKEI Computer, No. 469, Nikkei Business Publications, Inc, May 10, 1999, pp. 87-91.
Knowles, et al. "Stop, in the Name of Spam". Communications of the ACM, Nov. 1998, pp. 11-14, vol. 41 No. 11, ACM.
Kohavi, "A study of cross-validation and bootstrap accuracy estimation and model selection", Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 2 (12), retrieved from http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL2/PDF/016/pdf, 1995, pp. 1137-1143.
Koller, et al., "Hierarchically classifying documents using very few words"; In ICML 97: Proceedings of the Fourteenth International Conference on Machine Learning; San Francisco, CA; Morgan Kaufmann 1997; 9 pages.
Koller, et al., "Toward Optimal Feature Selection" Machine Learning; Proc. of the Thirteenth International Conference, Morgan Kaufmann, 1996, 9 pages.
Lewis, "An Evaluation of Phrasal and Clustered Representations on a Text Categorization Task"; 15th Annual International SIGIR 92; Denmark 1992; pp. 37-50.
Lewis, "Representation and learning in information retrieval" University of Massachusetts, 1992.
Lewis, et al., "A Comparison of Two Learning Algorithms for Text Categorization", Third Annual Symposium on Document Analysis and Information Retrieval; Apr. 11-13, 1994; pp. 81-93.
Li, et al., "Classification of Text Documents", Department of Computer Science and Engineering, Michigan State University, E. Lansing, Michigan, The Computer Journal, vol. 41, No. 8, 1998; 537-546.
Li, et al., "Secure Human-Computer Identification against Peeping Attacks (SecHCI): A Survey"; Technical Report, Microsoft Research, 2003. 53 pages.
Iwayama, et al., "Hierarchical Bayesian Clustering for Automatic Text Classification", Natural Language; 1995; pp. 1322-1327.
Madigan, "Statistics and The War on Spam", Rutgers University, pp. 1-13, 2003.
Manco, et al., "Towards an Adaptive Mail Classifier"; In Proceedings of Italian Association for Artificial Intelligence Workshop, 2002. 12 pages.
Massey, et al., "Learning Spam: Simple Techniques for Freely-Available Software"; Proceedings of Freenix Track 2003 Usenix Annual Technical Conference, Online, Jun. 9, 2003, pp. 63-76, Berkeley, CA, USA.
Mimoso, "Quick Takes: Imagine Analysis, Filtering Comes to E-mail Security", http://searchsecurity.techtarget.com/originalContent.html (Feb. 5, 2002).
Mitchell, "Machine Learning", Carnegie Mellon Universy, Bayesian Learning, Chapter 6, pp. 180-184, The McGraw-Hill Companies, Inc. cc 1997.
Mock, "An Experimental Framework for Email Categorization and Management" Proceedings of the 24th Annual International ACM SIGIR Conference, pp. 292-293. 2001.
OA dated Jan. 16, 2009 for U.S. Appl. No. 10/917,077, 34 pages.
OA dated Nov. 28, 2008 for U.S. Appl. No. 10/799,455, 53 pages.
OA dated Nov. 6, 2008 for U.S. Appl. No. 10/799,992, 46 pages.
OA dated Oct. 8, 2008 for U.S. Appl. No. 11/743,466, 43 pages.
O'Brien, et al., "Spam Filters: Bayes vs. Chi-squared; Letters vs. Words" Proceedings of the 1st international symposium on Information and communication technologies, 2003, pp. 291-296, Dublin, Ireland.
Palme, et al., "Issues when designing filters in messaging systems", Department of Computer and Systems Sciences, Stockholm University, Royal Institute of Technology, Skeppargarten 73, S-115 30, Stockholm, Sweden, Computer Communications, 1996, pp. 95-101.
Pantel, et al., "Spam Cop: A Spam Classification & Organization Program"; In Proceedings AAAI-1998 Workshop on Learning for Text Categorization, 1998. 8 pages.
Partial European Search Report, EP05100847, mailed Jun. 21, 2005, 5 pages.
Quinlan, "C4.5: Programs for Machine Learning"; Morgan Kaufmann, San Francisco, CA (1993).
Rennie, "ifile: An Application of Machine Learning to E-Mail Filtering"; Proceedings of the KDD-2000 Workshop on Text Mining, Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2000. 6 pages.
Rosen, "E-mail Classification in the Haystack Framework" Massachusetts Institute of Technology, Feb. 2003.
Sahami, "Learning Limited Dependence Bayesian Classifiers" in KDD-96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, AAAI Presss, 1996, Menlo Park, CA, pp. 335-338.
Schutze, et al., "A Comparison of Classifiers and Document Representations for the Routing Problem", Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA Jul. 9-13, 1995; pp. 229-237.
Sebastiani, "Machine Learning in Automated Text Categorization"; ACM Computing Surveys, vol. 34 Issue 1, pp. 1-47, 2002.
Segal, et al., "SwiftFile: An Intelligent Assistant for Organizing E-Mail", IBM Thomas J. Watson Research Center, Copyright 2000, American Association for Artificial Intelligence (www.aaai.org).
Sahami, et al., "A Bayesian Approach to Filtering Junk E-Mail", Stanford University, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.1254, 1998.
Shimmin, B.F., "Effective use of electronic post", FENIX Publishing House, Rostov-na-Donu, 1998, pp. 229-249.
Simard, et al., "Using Character Recognition and Segmentation to Tell Computer from Humans", International Conference on Document Analysis and Recognition (ICDAR), IEEE Computer Society. Los Alamitos, pp. 418-423, 2003.
Skoll, "How to Make Sure a Human is Sending You Mail", Google, Newsgroups: news.admin.net-abuse.usenet. Nov. 17, 1996.
Skoll, David, How to Make Sure a Human is Sending You Mail, Newsgroup Citation, Online, Nov. 17, 1997, XP002267504, news.admin.net-abuse.usenet, http://groups.google.ca/groups.
Spertus, "Smokey: Automatic Recognition of Hostile Messages" Proceedings of the Conference on Innovative Applications in Artificial Intelligence (IAAI), 1997, 8 pages.
Stop, in the Name of Spam, Communications of the ACM, Nov. 1998, pp. 11-14, vol. 41 No. 11, ACM.
Takkinen, et al., "CAFE: A Conceptual Model for Managing Information in Electronic Mail", Laboratory for Intelligent Information Systems, Department of Computer and Information Science, Linkoping University, Sweden, Conference on System Sciences, 1998 IEEE.
The Canadian Office Action mailed May 31, 2011 for Canadian Patent Application No. 2513967, a counterpart foreign application of US Patent No. 7,219,148.
Translated Israeli Office Action mailed Jan. 26, 2011 for Israeli Patent Application No. 206121, a counterpart foreign application of US Patent No. 7,558,832.
Turner, et al., "Controlling Spam through Lightweight Currency"; In Proceedings of the Hawaii International Conference on Computer Sciences, Jan. 2004. 9 pages.
Turner, et al., "Payment-Based Email"; 5th International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Jun. 2004. 7 pages.
White, "How Computers Work"; QUE Publishing, 2004, pp. 238-239.
Wong, "Preventing Spams and Relays" Linux Journal, Dec. 1998, 6 pages, vol. 1998 Issue 56es, Specialized Systems Consultants, Inc.
Wong, "SPF Overview"; Linux Journal, Apr. 2004, 6 pages, vol. 2004 Issue 120, Specialized Systems Consultants, Inc.
Written Opinion of the International Preliminary Examing Authority mailed Nov. 30, 2005 for PCT/US03/41526, 5 pages.
Wu, et al., "A new anti-Spam filter based on data mining and analysis of email security"; Conference Proceedings of the SPIE, Data Mining and Knowledge Discovery Theory, Tools and Technology V, vol. 5098, Apr. 21, 2003, pp. 147-154, Orlando FL USA.
Yang, et al., "A Comparative Study on Feature Selection in Text Categorization" School of Computer Science, Carnegie Melton University, Pittsburgh, PA and Verity, Inc., Sunnyvale, CA; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.9956; 1997; 9 pages.
Yang, et al., "An Example-Based Mapping Method for Text Categorization and Retrieval"; ACM Transactions on Information Systems, vol. 12, No. 3, Jul. 1994, pp. 252-277.

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE49334E1 (en) 2005-10-04 2022-12-13 Hoffberg Family Trust 2 Multifactorial optimization system and method
US8725597B2 (en) 2007-04-25 2014-05-13 Google Inc. Merchant scoring system and transactional database
US20080270209A1 (en) * 2007-04-25 2008-10-30 Michael Jon Mauseth Merchant scoring system and transactional database
US20110225076A1 (en) * 2010-03-09 2011-09-15 Google Inc. Method and system for detecting fraudulent internet merchants
US10013729B2 (en) * 2010-12-21 2018-07-03 Facebook, Inc. Categorizing social network objects based on user affiliations
US8738705B2 (en) * 2010-12-21 2014-05-27 Facebook, Inc. Categorizing social network objects based on user affiliations
US20140222821A1 (en) * 2010-12-21 2014-08-07 Facebook, Inc. Categorizing social network objects based on user affiliations
US9672284B2 (en) * 2010-12-21 2017-06-06 Facebook, Inc. Categorizing social network objects based on user affiliations
US20120158851A1 (en) * 2010-12-21 2012-06-21 Daniel Leon Kelmenson Categorizing Social Network Objects Based on User Affiliations
US8396935B1 (en) * 2012-04-10 2013-03-12 Google Inc. Discovering spam merchants using product feed similarity
US9811830B2 (en) 2013-07-03 2017-11-07 Google Inc. Method, medium, and system for online fraud prevention based on user physical location data
US10134041B2 (en) 2013-07-03 2018-11-20 Google Llc Method, medium, and system for online fraud prevention
US11308496B2 (en) 2013-07-03 2022-04-19 Google Llc Method, medium, and system for fraud prevention based on user activity data
US10621181B2 (en) * 2014-12-30 2020-04-14 Jpmorgan Chase Bank Usa, Na System and method for screening social media content

Also Published As

Publication number Publication date
US20070100949A1 (en) 2007-05-03

Similar Documents

Publication Publication Date Title
US8065370B2 (en) Proofs to filter spam
KR101255362B1 (en) Secure safe sender list
US11159523B2 (en) Rapid identification of message authentication
US9253199B2 (en) Verifying authenticity of a sender of an electronic message sent to a recipient using message salt
US7571319B2 (en) Validating inbound messages
US8582760B2 (en) Method and system of managing and filtering electronic messages using cryptographic techniques
US9894039B2 (en) Signed ephemeral email addresses
KR101109817B1 (en) Method and apparatus for reducing e-mail spam and virus distribution in a communications network by authenticating the origin of e-mail messages
KR101238527B1 (en) Reducing unwanted and unsolicited electronic messages
US7599993B1 (en) Secure safe sender list
Lawton E-mail authentication is here, but has it arrived yet?
US20220263822A1 (en) Rapid identification of message authentication
Wu et al. Blocking foxy phishing emails with historical information
US9118628B2 (en) Locked e-mail server with key server
US20230318844A1 (en) Enhancing Domain Keys Identified Mail (DKIM) Signatures
Schwenk Email: Protocols and SPAM
WO2005104422A1 (en) Electronic message authentication process

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HULTEN, GEOFFREY J;SESHADRINATHAN, GOPALAKRISHNAN;GOODMAN, JOSHUA T.;AND OTHERS;SIGNING DATES FROM 20051031 TO 20060405;REEL/FRAME:017454/0010

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HULTEN, GEOFFREY J;SESHADRINATHAN, GOPALAKRISHNAN;GOODMAN, JOSHUA T.;AND OTHERS;REEL/FRAME:017454/0010;SIGNING DATES FROM 20051031 TO 20060405

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034543/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20231122