CA2828490A1 - System and method for identifying and ranking user preferences - Google Patents

System and method for identifying and ranking user preferences

Info

Publication number
CA2828490A1
Authority
CA
Canada
Prior art keywords
preferences
ranking
mallows
preference information
preference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA2828490A
Other languages
French (fr)
Inventor
Tian Lu
Craig Edgar BOUTILIER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2828490A1 publication Critical patent/CA2828490A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Abstract

The present invention is a method and system for learning models of the preferences of members drawn from some population or group, utilizing arbitrary paired preferences of those members, in any commonly used ranking model. In particular the present invention involves techniques for learning Mallows models, and mixtures thereof, from pairwise preference data.

Description

SYSTEM AND METHOD FOR IDENTIFYING AND RANKING USER PREFERENCES
Field of Invention
This invention relates in general to systems and methods for identifying and ranking user preferences.
Background of the Invention
The prevalence of Internet commerce, social networking, and web search in recent years has produced a wealth of data about the preferences of individual users of such services.
Various solutions have been inspired by work in the fields of statistics and machine learning that provide automated mechanisms to find exploitable patterns in such data.
The exploitable patterns may be used to ultimately provide better recommendations of products, services, information, social connections, and other options or items to individuals (or groups of individuals). The increased quality of recommendations provided improves user experience, satisfaction, and uptake of these services.
With the abundance of preference data from search engines, review sites, etc., there is tremendous demand for learning detailed models of user preferences to support personalized recommendation, information retrieval, social choice, and other applications. Much work has focused on ordinal preference models and learning user or group "rankings" of items. Two classes of models are distinguishable. A first model may wish to learn an underlying objective (or "correct") ranking from noisy data or noisy expressions of user preferences (e.g., as in web search, where user selection suggests relevance). A second model may assume that users have different "types" with inherently distinct preferences, and aim to learn a population model that explains this diversity. Learning preference types (e.g., by segmenting or clustering the population) can be critical to effective personalization and preference elicitation: e.g., with a learned population preference distribution, choice data from a specific user allows inferences to be drawn about her preferences.
One aspect of research in this domain has focused on leveraging product ratings (typically given on a small, numerical scale), and users' profile data to predict the missing ratings or preferences of individual users (e.g., how much will user A like a movie M that she has not yet seen). This is known as "collaborative filtering", because the prediction algorithms aggregate the collective, and usually partial, preferences of all users. These approaches take into account the diversity of preferences across users.
See, for example, the papers "Probabilistic Matrix Factorization" by R. Salakhutdinov and A. Mnih, Neural Information Processing Systems 2008, and "Learning from incomplete data" by Z. Ghahramani and M.I. Jordan, MIT Artificial Intelligence Memo No. 1509. There are a variety of commercially relevant recommender systems based on collaborative filtering. Considerable work in machine learning has exploited ranking models developed in the statistics and psychometrics literature, such as the Mallows model (Mallows, 1957), the Plackett-Luce model (Plackett, 1975; Luce, 1959), and others (Marden, 1995).
This work involves learning probability distributions over ranking preferences of a user population.
The models investigated in this line of research are usually derived from models proposed in the psychometric and statistics literature and include the Mallows model, the Plackett-Luce model, the Thurstonian model and several others (see J.I. Marden, "Analyzing and Modeling Rank Data", Chapman and Hall, 1995). The Mallows model has attracted particular attention in the machine learning community.
However, research to date provides methods for learning preference distributions using very restricted forms of evidence about individual user preferences, ranging from full rankings, to top-t (or bottom-t) items, to partitioned preferences (Lebanon & Mao, 2008).
Missing from this list are arbitrary pairwise comparisons of the form "a is preferred to b." Such pairwise preferences form the building blocks of almost all reasonable evidence about preferences, and subsume the most general evidential models proposed in the literature. Furthermore, preferences in this form naturally arise in active elicitation of user preferences and in choice contexts (e.g., web search, product comparison, advertisement clicks), where a user selects one alternative over others (Louviere et al., 2000).
While learning with pairwise preferences is clearly of great importance, most believe that this problem is impractically difficult: so, for instance, the Mallows model is often shunned in favour of more inference-friendly models (e.g., the Plackett-Luce model, which accommodates more general, but still restrictive, preferences (Cheng et al., 2010; Guiver & Snelson, 2009)). To date, no methods have been proposed for learning from arbitrary paired preferences in any commonly used ranking model.
Examples of relevant prior art include: Amazon.com, which recommends consumer products based on past purchases, product details viewed and other relevant features; and Netflix.com, which recommends movies primarily based on movie ratings on a predefined scale.
Another aspect that has been the subject of prior art research is finding an objective, or ground truth, ranking of items based on expert relevance ratings or (noisy) user feedback in the form of comparisons on pairs of items. Algorithms for this problem have typically been applied in the domain of web search engines, where an objective ranking must be output for a given user search query. Some relevant papers on this subject are referenced below and in the paper Tyler Lu & Craig Boutilier, "Learning Mallows Models with Pairwise Preferences." Notably, such algorithms have been applied in large commercial search engines such as GoogleTM and Microsoft BingTM.
Much of the prior art has focused on learning (i.e., inferring parameters) for such models or mixtures thereof (i.e., several Mallows distributions combined together, each forming a cluster) given very restrictive forms of preferences used as evidence/observations from which the model is to be learned. Existing prior art techniques require, for example, that observations of user preferences take the form of a full ranking, a partial ranking consisting of the top few items, or other such variations. Relevant prior art references include the following:
Burges, C. From RankNet to LambdaRank to LambdaMART: An overview. TR-2010-82, Microsoft Research, 2010.
Busse, L.M., Orbanz, P., and Buhmann, J.M. Cluster analysis of heterogeneous rank data. ICML, pp. 113-120, 2007.
Cheng, W., Dembczynski, K., and Hüllermeier, E. Label ranking methods based on the Plackett-Luce model. ICML-10, pp. 215-222, Haifa, 2010.
Doignon, J., Pekec, A., and Regenwetter, M. The repeated insertion model for rankings: Missing link between two subset choice models. Psychometrika, 69(1):33-54, 2004.
Dwork, C., Kumar, R., Naor, M., and Sivakumar, D. Rank aggregation methods for the web. WWW-01, pp. 613-622, Hong Kong, 2001.
Guiver, J. and Snelson, E. Bayesian inference for Plackett-Luce ranking models. ICML-09, pp. 377-384, 2009.
Kamishima, T., Kazawa, H., and Akaho, S. Supervised ordering: an empirical survey. IEEE Data Mining-05, pp. 673-676, 2005.
Lebanon, G. and Mao, Y. Non-parametric modeling of partially ranked data. J. Machine Learning Research, 9:2401-2429, 2008.
Louviere, J., Hensher, D., and Swait, J. Stated Choice Methods: Analysis and Application. Cambridge, 2000.
Luce, R.D. Individual choice behavior: A theoretical analysis. Wiley, 1959.
Mallows, C.L. Non-null ranking models. Biometrika, 44:114-130, 1957.
Marden, J.I. Analyzing and modeling rank data. Chapman and Hall, 1995.
Murphy, T.B. and Martin, D. Mixtures of distance-based models for ranking data. Computational Statistics and Data Analysis, 41:645-655, 2003.
Neal, R. and Hinton, G. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Jordan, M. (ed.), Learning in Graphical Models, pp. 355-368. MIT Press, Cambridge, MA, 1999.
Plackett, R. The analysis of permutations. Applied Statistics, 24:193-202, 1975.
Young, P. Optimal voting rules. J. Economic Perspectives, 9:51-64, 1995.
Summary of the Invention
In one aspect of the invention, a computer implemented method is provided for identifying and ranking preferences regarding a plurality of options, for a group including two or more members, the method comprising: (a) obtaining from one or more data sources, preference information, including partial preference information, for the members, or a subset of the members, wherein the partial preference information may include a set of pairwise comparisons involving one or more of the options;
(b) analyzing, by operation of one or more server computers, the pairwise comparisons so as to learn one or more statistical models for inferring and ranking a set of preferences based on the partial preference information; and (c) applying the one or more statistical models so as to identify the set of preferences and rank the options.
In another aspect, the one or more statistical models are selected to fit with the partial preference information.
In yet another aspect, a plurality of mixtures of statistical models for inferring and ranking preferences is selected, thus enabling the formation of clusters consisting of probabilistic distributions applied to segments of the group.
In another aspect, the method includes a further step of automatically determining (i) a series of model parameters that best fit the available preference information for selecting one or more statistical models that best fits the preference information, and (ii) based on the model parameters, selecting one or more applicable statistical models.
In another aspect, the method enables prediction of unobserved preferences of specific members.
In a still other aspect, the one or more statistical models include a Mallows model for specifying a probability distribution over a ranking of the choices.
In yet another aspect, the Mallows model is specified by a mean ranking reflecting the average preferences of the group plus a dispersion parameter representing the variability of preferences in the group.
In a still other aspect, the preference information is obtained from one or more of: (a) user ratings or comparisons of products/services, on an explicit basis; (b) user actions such as product selections, social media interactions, clicking on web links, or past survey responses, on an implicit basis.
In yet another aspect, the model parameters are inferred given observations of user behavior, survey data, or implicit choice data, and the model parameters consist of inferred clusters of users, or preference types, based on partial preference data.
In a still other aspect, a computer network implemented system is provided for identifying and ranking preferences regarding a plurality of options, for a group including two or more members, the system comprising: (a) one or more server computers, connected to an interconnected network of computers, and linked to a server application;
(b) the server application includes or is linked to a preference inference engine that: (i) obtains from one or more data sources linked to the one or more server computers, preference information, including partial preference information, for the members, or a subset of the members, wherein the partial preference information may include a set of pairwise comparisons involving one or more of the options; (ii) analyzes the pairwise comparisons so as to learn one or more statistical models for inferring and ranking a set of preferences based on the partial preference information; and (iii) applies the one or more statistical models so as to identify the preferences and rank the set of options.
In yet another aspect, the one or more statistical models are selected to fit with the partial preference information.
In another aspect, the server application is operable to automatically (i) determine a series of model parameters that best fit the available preference information for selecting one or more statistical models that best fit the preference information, (ii) based on the model parameters, select one or more applicable statistical models, and (iii) apply, via the inference engine, the selected one or more applicable statistical models so as to infer a preference set or preference ranking.
In yet another aspect of the invention, the inference engine is operable to predict unobserved preferences of specific members.
In yet another aspect, the one or more statistical models include a Mallows model for specifying a probability distribution over a ranking of the choices.
In another aspect, the Mallows model is specified by a mean ranking reflecting the average preferences of the group plus a dispersion parameter representing the variability of preferences in the group.
In a still other aspect of the method, a further step includes identifying the preferences so as to enable the prediction of the preferences, using the pairwise preferences and the application of a Mallows model/mixture. In another aspect of the invention, the system is operable to enable the prediction of the preferences by applying a Mallows model/mixture to the pairwise preferences. In this respect, before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that
the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
Brief Description of the Drawings
The invention will be better understood and objects of the invention will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
FIG. 1 shows an example of valid insertion ranks for item "e" given previously inserted items and constraints.
FIG. 2a is a table showing the learned clusters for sushi data.
FIG. 2b shows a plot of the sushi average validation log likelihoods on various learned models.
FIG. 2c shows a plot of the Movielens log likelihoods on various learned models.
FIG. 3 shows an example of ranking distributions, in accordance with GRIM in a tabular format.
FIG. 4 is a system diagram illustrating a representative implementation of the present invention.
FIG. 5 is a generic system diagram illustrating an implementation of the invention.
In the drawings, embodiments of the invention are illustrated by way of example. It is to be expressly understood that the description and drawings are only for the purpose of illustration and as an aid to understanding, and are not intended as a definition of the limits of the invention.
Detailed Description of the Preferred Embodiment
One aspect of the invention is a novel and innovative way of learning probabilistic distributions over ranking preferences of a user population based on one or more ranking models. Various ranking models exist and these generally enable the determination of ranking choices within a population of interest (such as a defined user base or a target market), and also optionally the segmentation of the population of interest based on participation in different probabilistic distributions.
One example of a common ranking model is the so-called Mallows model, although other ranking models exist, such as the Plackett-Luce model, the Thurstonian model and several others (see the book "Analyzing and Modeling Rank Data" by J.I. Marden).
In one aspect of the invention, a novel and innovative method and system is provided for inferring the parameters of one or more statistical models for inferring and ranking preferences (such as known ranking models). In another aspect, the method and system enables inference of a plurality of mixtures of statistical models for inferring and ranking preferences, thus enabling the formation of clusters consisting of probabilistic distributions applied to segments of the population.
Mixtures of Mallows models may consist of a variable number of Mallows models, each associated with a weight. Mixture models may be more flexible, allowing for representation of different segments of the user population whose preferences may be more distinct than permitted by a single model.
A population may also be referred to as a "group", and the segments or sub-populations may also be referred to as "sub-groups".
Existing techniques generally require that the preference information (which may also be understood as choice information) meet specific requirements, such as (i) that there be a full ranking, (ii) a partial ranking consisting of the top few items, or (iii) other variations, as set out in the paper entitled "Learning Mallows Models with Pairwise Preferences" (listed in the references), which provides a thorough discussion.

Yet preference information that is most widely available, for example that can be obtained explicitly from user ratings or comparisons of products/services, or implicitly from user actions like product selection, social media interactions, "check in" using location-aware services like FourSquareTM or FacebookTM, or simple clicks on links (e.g., on different search results, advertisements, or news items, etc.), generally does not meet these specific requirements. It should be understood that this information may be obtained from one or more data sources that are connected to a server computer that implements the operations of the present invention, for example as an Internet service.
This widely available information generally consists of partial preference information, which may be expressed as a set of pairwise comparisons over alternatives (items, products, options, information, web/search results, etc.). Each pairwise comparison may be represented, for example, in the form "I prefer item A to item B". This is in contrast to the full ranking or partial ranking with a plurality of top ranked items, as referenced above.
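As a minimal sketch (the item names "A" through "D" are placeholders, not drawn from the patent), such partial preference information can be represented directly as a set of ordered pairs, and any candidate full ranking can be checked for consistency against it:

```python
# Each tuple (a, b) reads "item a is preferred to item b".
pairwise_prefs = {("A", "B"), ("A", "C"), ("C", "D")}

def consistent(ranking, prefs):
    """True if the full ranking (best first) agrees with every pair."""
    position = {item: i for i, item in enumerate(ranking)}
    return all(position[a] < position[b] for a, b in prefs)

print(consistent(["A", "C", "D", "B"], pairwise_prefs))  # True
print(consistent(["B", "A", "C", "D"], pairwise_prefs))  # False
```

Note that such a set of pairs need not determine a unique ranking: here both "A C D B" and "A B C D" are consistent, which is exactly why a probabilistic model over the remaining uncertainty is useful.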
While preference information in this form is widely available, referred to as "partial preference information", the use of such partial preference information for learning one or more statistical models for inferring and ranking preferences in a given population is not considered to be computationally feasible based on prior art solutions.
In a first aspect of the present invention, the inventors realized that one or more statistical models for inferring and ranking preferences may be learned using partial preference information if the partial preference information is expressed as pairwise comparisons.
In a second aspect of the invention, a plurality of particular methods are provided that enable learning of one or more applicable statistical models for inferring and ranking preferences by automatically determining (i) a series of model parameters that best fit the available preference information for selecting one or more statistical models that best fit the available preference information, and (ii) based on the model parameters, selecting one or more applicable statistical models. The statistical models may then be used to make predictions about the unobserved preferences of specific users, and to make decisions such as product recommendations based on the predictions.
In another aspect of the invention, a method is provided that utilizes a Mallows model (as a particular example of a common statistical model for inferring and ranking preferences) to specify a probability distribution over the ranking of the items. This ranking is operable to represent the preferences of a random user, with higher ranked items being more preferred. The Mallows model specifies the probability that a randomly selected user from a population of interest will have a specific preference ranking. More specifically, consider a collection of items (e.g., products) given a population of interest (e.g., a target market). Given a ranking r of these items, reflecting an ordering of preferences associated with the collection of items, the Mallows model describes the probability with which a random user has preferences represented by r. The Mallows model is specified by a mean ranking reflecting the "average preferences" of the population plus a dispersion parameter representing the variability of preferences in the population.
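As an illustrative sketch (not taken from the patent text; item names and parameter values are hypothetical), the Mallows model can be written down in a few lines: the probability of a ranking r decays geometrically in its Kendall-tau distance from the mean ranking sigma, at a rate set by the dispersion phi.

```python
from itertools import permutations

def kendall_tau(r, sigma):
    """Number of item pairs ordered differently by rankings r and sigma."""
    pos = {item: i for i, item in enumerate(sigma)}
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if pos[r[i]] > pos[r[j]])

def mallows_prob(r, sigma, phi):
    """P(r | sigma, phi) proportional to phi ** d(r, sigma), where d is
    the Kendall-tau distance.  Normalization is brute force over all
    rankings, so this sketch is only practical for small item sets."""
    z = sum(phi ** kendall_tau(list(p), sigma) for p in permutations(sigma))
    return phi ** kendall_tau(r, sigma) / z

sigma = ["A", "B", "C"]  # mean ("average") ranking of the population
phi = 0.5                # dispersion: near 0 concentrates on sigma, 1 is uniform
```

With phi = 0.5 on three items, the mean ranking itself receives probability 1/2.625 (about 0.38), and the probability falls off by a factor of phi for each additional pairwise disagreement with sigma.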
In another aspect of the invention, a plurality of different Mallows models may be utilized (or a "mixture" of Mallows models) that segment the population into sub-populations, each having a corresponding Mallows model. Each sub-population so generated consists of a cluster, segment, or group of users who have similar preferences.
Within each cluster, a group member's true preference ranking is a random perturbation of the average preference ranking of that group. Mixtures of Mallows models are generally more flexible, allowing one to represent different segments of the user population whose preferences are much more distinct than permitted by a single model.
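A mixture of Mallows models can be sketched as a weighted list of components, each with its own mean ranking and dispersion; the mixture probability of a ranking is the weighted sum of the component probabilities. The weights, rankings, and dispersions below are hypothetical values chosen for illustration, not values from the patent.

```python
from itertools import permutations

def kendall_tau(r, sigma):
    """Number of item pairs ordered differently by r and sigma."""
    pos = {item: i for i, item in enumerate(sigma)}
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if pos[r[i]] > pos[r[j]])

def mallows_prob(r, sigma, phi):
    """Brute-force Mallows probability; illustrative for small item sets."""
    z = sum(phi ** kendall_tau(list(p), sigma) for p in permutations(sigma))
    return phi ** kendall_tau(r, sigma) / z

# Two clusters with sharply opposed mean rankings: (weight, mean, dispersion).
mixture = [
    (0.6, ["A", "B", "C"], 0.3),
    (0.4, ["C", "B", "A"], 0.3),
]

def mixture_prob(r, mixture):
    """Probability of ranking r under a mixture of Mallows models."""
    return sum(w * mallows_prob(r, sigma, phi) for w, sigma, phi in mixture)
```

A single Mallows model could not place high probability on both "A B C" and "C B A" at once; the two-component mixture represents each sub-group around its own mean ranking.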
The model parameters, in accordance with another aspect of the invention, are inferred given observations of user behavior, survey data, implicit choice data, etc.
The model parameters consist of inferred clusters of users, or preference types, based on partial preference data. Prior to the present invention, no mechanism was known in order to generate these model parameters based on pairwise comparisons or arbitrary choice data.
In one aspect of the invention, a key operation for generating the model parameters consists of approximately sampling rankings from a Mallows distribution conditioned on observed choice data. This operation may be implemented using a sampling algorithm.
The sampling algorithm is used to generate a plurality of samples to optimize the model parameters, thereby maximizing the degree of statistical fit. Further details are provided below.
It should be understood that the learning operations are unique and innovative, as are the sampling techniques used in conjunction with the learning operations.
In a particular implementation, the invention provides, in one aspect, one or more operations (which may be implemented using suitable algorithms) that relate to the generalized repeated insertion model (GRIM), for sampling from arbitrary ranking distributions. The present invention may also include operations (which may be implemented using suitable algorithms or other calculations) for evaluating log-likelihood, learning Mallows mixtures, and non-parametric estimation.
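The repeated insertion model of Doignon et al. (2004), on which GRIM builds, admits a compact sketch. The following unconditioned sampler is a simplification for illustration only: GRIM additionally conditions the insertion probabilities on observed preference data. The mean ranking and dispersion value are hypothetical.

```python
import random

def sample_mallows(sigma, phi, rng=random):
    """Draw one ranking from Mallows(sigma, phi) by repeated insertion:
    sigma's items are inserted one at a time, and inserting the i-th item
    at position j (counted from the top) adds i - j inversions, so that
    position is chosen with probability proportional to phi ** (i - j)."""
    ranking = []
    for i, item in enumerate(sigma, start=1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = rng.choices(range(1, i + 1), weights=weights)[0]
        ranking.insert(j - 1, item)
    return ranking
```

At phi = 0 every insertion lands at the bottom and the sampler returns sigma itself; as phi approaches 1 the insertion positions become uniform and so do the sampled rankings.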
It should be understood that the techniques described may also be used for the purpose of generating an approximation to the true probability distribution in a variety of domains.
Two experiments conducted using these techniques are explained below, using real-life data sets that demonstrate that the techniques described herein may effectively and efficiently learn good population preference models. The first data set utilized in the experiments consists of the elicited preferences of five thousand people for ten varieties of sushi. The second data set consists of the movie preferences of six thousand users, each giving ratings of movies they liked/disliked. Application of the present invention utilizing these data sets may produce results that reveal interesting preference patterns, as discussed below.

In one aspect of the invention, methods and systems are provided that enable the use of partial preferences to revise a particular individual's (full) preference ranking distribution, which can be used, for example, for inferring preferences, e.g. in generating personalized recommendations.
These methods are exact for many important special cases, and have provable bounds with pairwise evidence.
It should be understood that the present invention may be used for offline operation as well, for example in connection with a computer system used for discovering preferences based on partial preference information, as described further below.
System Implementation
The system of the present invention may consist of a computer network implemented inference service or engine that is configured to (i) utilize partial preference information, and (ii) apply one or more operations for selecting one or more applicable statistical models. The partial preference information may be obtained from a variety of entities (including human and computer entities such as software agents), from input provided by users using an input means (for example a web page), or from online or offline databases, or opinion or review websites. Preferences may also be elicited in a number of ways, for example by one or more queries posed to a member of a group about their pairwise preferences.

In an aspect of the invention, the system may be implemented by one or more server computers, connected to an interconnected network of computers, and linked to a server application. The system may also be implemented as part of a cloud service that is part of a cloud computing network.
The computer program of the present invention, in one aspect thereof, may be implemented as a server application, whether linked to one or more server computers or to the cloud service. The computer program may also be linked to or integrated with various other platforms or services that may benefit from the preference inference operations of the present invention. One example of an implementation of the present invention is shown in Fig. 4.
The present invention may be understood as providing an online inference service for inferring preferences for a population, and for dynamically generating preference clusters that may define sub-populations that are associated with the generated preference clusters. The online service may be configured in a variety of ways known to those skilled in the art, so as to embody the operations described herein.
In one implementation of the invention the computer system may be implemented for example, as shown in Fig. 4. The computer system may include for example a server computer (50), but may include one or more linked server computers, a distributed computer architecture, or a cloud computing environment. The server computer (50) is linked to a server application (52). The server application (52) includes functionality for enabling the operations described herein. In one aspect of the invention, the server application (52) includes or is linked to an inference engine (54) that is operable to implement one or more operations that implement the learning procedure herein.
The server application (52) may include a web presentment utility (56) that is operable to present one or more web pages that include a user interface for example for providing access to the output from the inference engine (54).
Also, as previously described preference information may be obtained from a variety of data sources, including for example online or offline databases, product review websites, social networking websites, location-aware applications or services, applications (including mobile applications) and so on.
The server application (52), in one implementation, may also include, for example, one or more filters for detecting if preference information may not be expressed as pairwise preferences. Additionally, the server application (52) may include programming that is operable to extrapolate pairwise preferences from preference information provided in other forms. Additionally, the server application (52) may include logic for generating all of the pairwise preferences that may be implied from particular preference input information.
A skilled reader will recognize that there are a variety of implementations of the present invention. The following provides an example of one implementation of the present invention, although other implementations are possible.
The server computer (50) is connected to an interconnected network of computers such as the Internet. Various computer systems may connect to the server computer (50) to obtain services described.
The following describes different implementations of the present invention, which may consist, for example, of Internet services that may be provided by the server computer (50).
Revising Ranking Preferences
A full ranking preference may be obtained, and then revised based on partial preference information, through a revision of the probabilities. The revised probabilities have more certainty in the modeling of that individual's full preference, especially as more preference data is revealed by, or obtained about, the same individual.
With the revised probabilities, statistical inference tasks may now be performed: this includes making customized product and item recommendations, placing the user in a particular market segment for purposes of advertising or marketing, or adapting survey questions designed for promotional, political and other decision-making purposes.
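As a sketch of one such inference task (the mixture parameters and items are hypothetical, and brute-force enumeration stands in for the sampling machinery described above), the posterior probability that a user belongs to each preference cluster, given her observed pairwise comparisons, follows directly from Bayes' rule:

```python
from itertools import permutations

def kendall_tau(r, sigma):
    """Number of item pairs ordered differently by r and sigma."""
    pos = {item: i for i, item in enumerate(sigma)}
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if pos[r[i]] > pos[r[j]])

def mallows_prob(r, sigma, phi):
    """Brute-force Mallows probability; small item sets only."""
    z = sum(phi ** kendall_tau(list(p), sigma) for p in permutations(sigma))
    return phi ** kendall_tau(r, sigma) / z

def consistent(r, prefs):
    """True if ranking r agrees with every observed pair (a, b)."""
    pos = {item: i for i, item in enumerate(r)}
    return all(pos[a] < pos[b] for a, b in prefs)

def cluster_posterior(prefs, mixture):
    """P(cluster | observed pairwise comparisons): each component's
    weight times the Mallows mass it puts on rankings consistent with
    prefs, renormalized across components."""
    items = mixture[0][1]
    evidence = [w * sum(mallows_prob(list(p), sigma, phi)
                        for p in permutations(items) if consistent(p, prefs))
                for w, sigma, phi in mixture]
    total = sum(evidence)
    return [e / total for e in evidence]

# Hypothetical mixture and a single observed comparison "A preferred to B".
mixture = [(0.5, ["A", "B", "C"], 0.3), (0.5, ["C", "B", "A"], 0.3)]
posterior = cluster_posterior({("A", "B")}, mixture)
```

Even the single observation "A preferred to B" shifts the posterior toward the first cluster, whose mean ranking places A above B; this is the segmentation step that downstream recommendation or marketing decisions would use.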

Collaborative Filtering Services: Providing collaborative filtering services. Prior art in collaborative filtering may involve a setting where users' numerical preference ratings are given. A collaborative filtering service based on the present invention requires only pairwise comparisons, which may be less demanding of users.

Objective Ranking: Computing an objective ranking. This may be as simple as assuming there is only one component in the Mallows mixture model and that user preferences are noisy deviations from the central, objective ranking.
Learning Mallows Models: The present invention may allow learning and inference with arbitrary choice data, which may be a building block of a wide range of practical preference structures, and may be much broader than the restrictive preferences used in the prior art.

The utility of learning probabilistic models of user preferences may be significant. Such models may be used, and may be necessary, for a variety of purposes. Some of these uses may include:
Product selection/design: A vendor choosing which product or products to offer its target market may want to select those with the greatest potential for sales. Optimal choice of a product (either explicitly designed, or selected from a set of options made available by suppliers) may depend on the distribution of preferences of members of its target market/population. A probabilistic model of consumer preferences may be a necessary input to any sound method or algorithm (or other calculation) for optimal selection of product offerings. Different products may be selected or designed for sales/marketing to each sub-group. A possible example may be displaying a slate of new release movies on a movie rental website such that there is at least one movie from the slate which appeals to any given subgroup.
Market segmentation: A model of population preferences may also be used to optimally segment the target market into sub-groups where members of a sub-group have similar preferences, but members of distinct subgroups have much more divergent preferences.
Consumer preference identification: The observed behavior or revealed preferences of a specific consumer (e.g., through a survey) may often be minimal. For instance, information about a consumer may consist of a small number of choices (perhaps even a single choice) of a product from a set of alternatives (e.g., the consumer chose to buy book X when presented with a set of options which included X). Using probabilistic inference techniques, this single piece of information may be used to probabilistically estimate that consumer's preferences for a variety of other products by conditioning on the observed choice. Specifically, by using the population preference model combined with the observed choice, it may be possible to provide a much more precise specification of that consumer's preferences than is offered by the population model alone.
This form of inference may be used to place the consumer in a particular market segment (see above) with more precision and greater confidence, and may form the basis of tailored marketing strategies, personalized product recommendation, etc. This form of inference may also be used to support the dynamic aggregation of individuals based on their inferred preferences for the purposes of demand-driven group discounting. When (sub) groups of individuals with similar preferences are automatically identified and targeted in this way, more customization of offers to sub-groups may take place, leading to greater yield (acceptance of offers) with less intrusion (fewer unwelcome offers).
Survey, active learning and preference elicitation methods: Application services may be designed to make recommendations (e.g., products, services, or information sources) after asking a user a number of queries related to their preferences. A model of population preferences can be used directly to optimize the query selection strategy in an adaptive fashion so as to minimize the expected (average) number of queries a user must face to determine good or optimal recommendations. This may lead to less cognitive burden on the part of users and hence greater satisfaction with a recommendation system that incorporates or links to the inference engine of the present invention.
Such technology may also be applied to group decision making, where preferences of group members may be diverse and a consensus choice must be made. In this case preference queries should be carefully chosen so as to ask the right individual the right preference query, in such a way as to recommend the best consensus choice while requiring minimal interaction with users. In such applications the technology used to generate intelligent queries may exploit the structure in the preference distribution of individuals, so that only relevant queries are asked. For example, if the input preference distribution indicates that a particular choice is generally dis-preferred, intelligent queries do not need to focus on that choice and can instead focus elicitation efforts on more popular choices. The present invention may include the generalized repeated insertion model (GRIM), a method for sampling from arbitrary ranking distributions, including conditional Mallows, that generalizes the repeated insertion method for unconditional sampling of Mallows models (Doignon et al., 2004).
For example, the present invention may utilize such a sampler as the core of a Monte Carlo EM algorithm to learn Mallows mixtures, as well as to evaluate log likelihood. It may be possible to extend the non-parametric framework of Lebanon & Mao (2008) to handle unrestricted ordinal preference data. Experiments have shown that the algorithms and other calculations of the present invention may effectively learn Mallows mixtures, with very reasonable running time, on datasets (e.g., Movielens) with hundreds of items and thousands of users.
Targeted advertising: The present invention may be used for targeted advertising purposes. Using preference data (of a very general kind, i.e. a set of pairwise comparisons) obtained from users' browser cookies, account information, etc., the present invention is operable to generate a statistical model that reveals the clustering structure of the users, wherein within each cluster (i.e. group) the users' preferences are similar to one another. This allows advertisers and marketers to tailor their messages to the interests of groups of similar users. Furthermore, the groups of similar users reveal preference patterns that help businesses to design their products in order to target different user groups. The system of the present invention may be operable, for example, to generate in real time or near real time clusters of users and associated inferred preferences, in support of, for example, an ad network.
Crowdsourcing: Crowdsourcing applications are applied in a variety of contexts, including gathering information, opinions and judgments from users, for example to arrive at a more accurate answer or decision. One prominent use case involves inferring the correct option (or choice, or object, etc.), or a ranking of such options, from a multitude of options. For example, consider a website that wants to categorize restaurants. For a given restaurant it can present users with a pair of categories such as "Korean" and "Japanese" and have the user choose the more appropriate categorization (e.g. Korean). This allows the website to collect, for each user, a set of pairwise comparisons about the most plausible categorization of a particular restaurant. The algorithms of the present invention can then be used to aggregate such pairwise comparisons from users and infer a ranked list of categorizations in order of plausibility.
Representative Implementation of Operations: A skilled reader will recognize that a variety of implementations of the present invention are possible. What follows is a detailed explanation of algorithms for enabling the implementation of the operations described above. A variety of Definitions and Theorems may be applied to the embodiments of the present invention. The Definitions and Theorems discussed herein are merely representative examples of possible Definitions and Theorems to be applied by embodiments of the present invention.
Preliminaries: It may be assumed that there is a set of items A = {a_1, ..., a_m} and n agents, or users, N = {1, ..., n}. Each agent ℓ may have preferences over the set of items represented by a total ordering or ranking ≻_ℓ over A. It may be possible to write x ≻_ℓ y to mean ℓ prefers x to y. Rankings may be represented as permutations of A. For any positive integer b, let [b] = {1, ..., b}. A bijection σ : A → [m] represents a ranking by mapping each item into its rank; thus, for i ∈ [m], σ^{-1}(i) is the item with rank i. It may be possible to write σ = σ_1σ_2···σ_m to indicate a ranking with i-th ranked item σ_i = σ^{-1}(i), and ≻_σ for the induced preference relation. For any X ⊆ A, let σ|_X denote the restriction of σ to the items in X; and 1[·] is the indicator function.
Generally, access to the complete preferences of agents may not be possible; only partial information about their rankings may be available (e.g., based on choice behaviour, query responses, etc.). It may be assumed that this data has a very general form: for each agent ℓ there may be a set of revealed pairwise preference comparisons over A, or simply preferences, v_ℓ = {x_1 ≻_ℓ y_1, ..., x_k ≻_ℓ y_k}. It is possible to write tc(v_ℓ) to denote the transitive closure of v_ℓ. Since preferences may be strict, tc(v_ℓ) may be a strict partial order on A. It may be assumed that each v_ℓ is consistent, in which case tc(v_ℓ) will contain no cycles. (It may be possible to apply concepts of the present invention to models where revealed preferences are noisy.) Preferences v_ℓ may be complete if tc(v_ℓ) is a total order on A. It may be possible to write Ω(v) to denote the linear extensions of v, i.e., the set of rankings consistent with v; Ω = Ω(∅) is the set of all m! complete preferences. A collection V = (v_1, ..., v_n) is a (partial) preference profile: this may comprise the observed data of the present invention.
Given σ = σ_1σ_2···σ_m and preferences v, it may be possible to define:

d(v, σ) = Σ_{i<j} 1[σ_j ≻ σ_i ∈ tc(v)]. (1)

This measures the dissimilarity between a preference set and a ranking using the number of pairwise disagreements (i.e., those pairs in v that are misordered relative to σ). If v is a complete ranking σ', then d(σ', σ) is the classic Kendall-tau metric on rankings.
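As an illustration of Eq. 1, d(v, σ) can be computed directly from the transitive closure of a pairwise preference set. The following is a minimal sketch (function names are ours, not part of the specification); `ranking` lists items from most to least preferred:

```python
def transitive_closure(prefs):
    """tc(v): transitive closure of strict pairwise preferences (x, y), x preferred to y."""
    closure = set(prefs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (w, z) in list(closure):
                if y == w and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

def disagreements(prefs, ranking):
    """d(v, sigma): number of pairs in tc(v) misordered relative to sigma (Eq. 1)."""
    pos = {item: i for i, item in enumerate(ranking)}
    return sum(1 for (x, y) in transitive_closure(prefs) if pos[x] > pos[y])
```

If v itself encodes a complete ranking, `disagreements` reduces to the Kendall-tau distance; for example, the preferences c ≻ b ≻ a compared against σ = abc misorder all three pairs.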
Arbitrary sets v of paired comparisons model a wide range of realistic revealed preferences. Full rankings (Murphy & Martin, 2003) may require m − 1 paired comparisons (a ≻ b ≻ c ≻ ···); top-t preferences (Busse et al., 2007) may need m − 1 pairs (t − 1 pairs to order the top t items, and m − t pairs to set the t-th item above the remaining m − t items); rankings of subsets X ⊆ A (Guiver & Snelson, 2009; Cheng et al., 2010) may also be representable. It may also be possible to consider the following rich class:
Definition 1 (Lebanon & Mao 2008). A preference set v is a partitioned preference if A can be partitioned into subsets A_1, ..., A_q such that: (a) for all i < j ≤ q, if x ∈ A_i and y ∈ A_j then x ≻_v y; and (b) for each i ≤ q, the items in A_i are incomparable under tc(v).
Partitioned preferences are general, subsuming the special cases above.
However, they may not represent many naturally occurring revealed preferences, including something as simple as a single paired comparison: v = {a ≻ b}.
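The pair counts claimed above can be checked with a small sketch (helper names are ours): a full ranking needs m − 1 adjacent pairs, and a top-t preference needs (t − 1) + (m − t) = m − 1 pairs:

```python
def full_ranking_pairs(ranking):
    """Encode a full ranking a > b > c > ... as its m-1 adjacent pairwise comparisons."""
    return [(ranking[i], ranking[i + 1]) for i in range(len(ranking) - 1)]

def top_t_pairs(top, rest):
    """Encode a top-t preference: t-1 pairs order the top t items, and one pair
    per remaining item places it below the t-th item."""
    return full_ranking_pairs(top) + [(top[-1], x) for x in rest]
```

For m = 5 items with t = 2, `top_t_pairs(['a', 'b'], ['c', 'd', 'e'])` yields 1 + 3 = 4 = m − 1 pairs, as stated.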
There are many distributional models of rankings; Marden (1995) provides a good overview. The two most popular in the learning community are the Mallows (1957) model and the Plackett-Luce model (Plackett, 1975; Luce, 1959). The present invention focuses on Mallows models, though embodiments of the present invention may extend to other models. The Mallows model may be parameterized by a modal or reference ranking σ and a dispersion parameter φ ∈ (0, 1]. Letting r refer to an arbitrary ranking, the Mallows model specifies:

P(r) = P(r | σ, φ) = (1/Z) φ^{d(r,σ)}, (2)

where Z = Σ_{r'} φ^{d(r',σ)}. The normalization constant satisfies:

Z = 1 · (1 + φ) · (1 + φ + φ²) ··· (1 + φ + ··· + φ^{m−1}). (3)

When φ = 1 it may be possible to obtain the uniform distribution over the space Ω of rankings, and as φ → 0 the distribution approaches one that concentrates all mass on σ. Sometimes the model is written with φ = e^{−λ}, where λ ≥ 0. To overcome the unimodal nature of Mallows models, mixture models have been proposed. A mixture with K components may require reference rankings σ = (σ_1, ..., σ_K), dispersion parameters φ = (φ_1, ..., φ_K), and mixing coefficients π = (π_1, ..., π_K). EM methods for such mixtures (limited to complete or top-k data) have been studied (Murphy & Martin, 2003; Busse et al., 2007).
The repeated insertion model (RIM), introduced by Doignon et al. (2004), is a generative process that gives rise to a family of distributions over rankings and provides a practical way to sample rankings from a Mallows model. Assume a reference ranking σ = σ_1σ_2···σ_m and insertion probabilities p_{i,j} for each i ≤ m, j ≤ i. RIM generates a new output ranking using the following process, proceeding in m steps. At step 1, σ_1 is added to the output ranking. At step 2, σ_2 is inserted above σ_1 with probability p_{2,1} and inserted below with probability p_{2,2} = 1 − p_{2,1}. More generally, at the i-th step, the output ranking will be an ordering of σ_1, ..., σ_{i−1}, and σ_i will be inserted at rank j ≤ i with probability p_{i,j}. Critically, the insertion probabilities are independent of the ordering of the previously inserted items.
It may be possible to sample from a Mallows distribution using RIM with appropriate insertion probabilities.

Definition 2. Let σ = σ_1···σ_m be a reference ranking. Let I = {(j_1, ..., j_m) : j_i ≤ i, ∀ i ≤ m} be the set of insertion vectors. A repeated insertion function Φ_σ : I → Ω maps an insertion vector (j_1, ..., j_m) into a ranking Φ_σ(j_1, ..., j_m) by placing each σ_i, in turn, into rank j_i, for all i ≤ m.
The definition may be best illustrated with an example. Consider insertion vector (1,1,2,3) and σ = abcd. Then Φ_σ(1,1,2,3) = bcda because: a is first inserted into rank 1; b is then inserted into rank 1, shifting a down to give the partial ranking ba; c is then inserted into rank 2, leaving b in place but moving a down, giving bca; and d is inserted at rank 3, giving bcda. Given a reference ranking σ, there is a one-to-one correspondence between rankings and insertion vectors. Hence, sampling by RIM may be described as: draw an insertion vector j = (j_1, ..., j_m) ∈ I at random, where each j_i ≤ i is drawn independently with probability p_{i,j_i} (with Σ_{j≤i} p_{i,j} = 1 for each i), and return the ranking Φ_σ(j).

Theorem 3 (Doignon et al. 2004). By setting p_{i,j} = φ^{i−j} / (1 + φ + ··· + φ^{i−1}) for j ≤ i ≤ m, RIM induces the same distribution over rankings as the Mallows model.
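Definition 2 and Theorem 3 can be sketched as follows (an illustrative implementation, not the patented embodiment itself): `insertion_function` plays the role of Φ_σ, and `rim_mallows_sample` draws one ranking using the insertion probabilities of Theorem 3:

```python
import random

def insertion_function(sigma, jvec):
    """Phi_sigma: place each sigma_i, in turn, at rank jvec[i] (1-based)."""
    out = []
    for item, j in zip(sigma, jvec):
        out.insert(j - 1, item)
    return out

def rim_mallows_sample(sigma, phi, rng=random):
    """One RIM draw: insert sigma_i at rank j <= i with probability
    phi**(i-j) / (1 + phi + ... + phi**(i-1))  (Theorem 3)."""
    jvec = []
    for i in range(1, len(sigma) + 1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        total = sum(weights)
        u, j, acc = rng.random() * total, 1, weights[0]
        while acc < u and j < i:
            j += 1
            acc += weights[j - 1]
        jvec.append(j)
    return insertion_function(sigma, jvec)
```

Per the worked example, Φ_abcd(1,1,2,3) = bcda; as φ → 0 the sampler returns σ itself with probability approaching 1, matching the concentration behaviour noted for Eq. 2.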
Thus RIM may offer a simple, useful way to sample rankings from the Mallows distribution. (RIM may also be used to sample from variants of the Mallows model, e.g., those using a weighted Kendall-tau distance.)

Generalized Repeated Insertion: While RIM may provide a powerful tool for sampling from Mallows models (and by extension, Mallows mixtures), it samples unconditionally, without (direct) conditioning on evidence. The present invention may generalize RIM by permitting conditioning at each insertion step. The present invention may utilize the generalized repeated insertion model (GRIM) to sample from arbitrary rank distributions.
Sampling from Arbitrary Distributions: Rather than focus on the conditional Mallows distribution given evidence about agent preferences, the present invention may apply GRIM abstractly as a means of sampling from any distribution over rankings. The chain rule allows the present invention to represent any distribution over rankings in a concise way, as long as dependencies in the insertion probabilities are admitted: specifically, the insertion probabilities for any item σ_i in the reference ranking may be conditioned on the ordering of the previously inserted items σ_1, ..., σ_{i−1}. Let Q denote a distribution over rankings and σ an (arbitrary) reference ranking. It may be possible to (uniquely) represent any ranking r ∈ Ω using σ and an insertion vector j^r = (j_1^r, ..., j_m^r), where r = Φ_σ(j^r). Thus Q may be represented by a distribution Q' over I: Q'(j^r) = Q(r). Similarly, for k < m, any partial ranking r[k] = (r_1, ..., r_k) of the items {σ_1, ..., σ_k} may be represented by a partial insertion vector j[k] = (j_1, ..., j_k). Letting Q(r[k]) = Σ{Q(r) : r consistent with r[k]} and Q'(j[k]) = Σ{Q'(j) : j extends j[k]} results in Q'(j[k]) = Q(r[k]). Conditional insertion probabilities may be defined as:

p_{i,j | j[i−1]} = Q'(j_i = j | j[i−1]). (4)

This denotes the probability with which the i-th item σ_i in the reference ranking is inserted at position j ≤ i, conditioned on the specific insertions of all previous items. By the chain rule, it may be possible to define:
Q'(j) = Q'(j_m | j[m−1]) Q'(j_{m−1} | j[m−2]) ··· Q'(j[1]).

If RIM is run in the present invention with the conditional insertion probabilities p_{i,j | j[i−1]} defined above, it draws random insertion vectors j by sampling j_1 through j_m in turn, each conditioned on the previously sampled components. The chain rule ensures that the resulting insertion vector is sampled from the distribution Q'. Hence the induced distribution over rankings r = Φ_σ(j) is Q. This procedure is referred to as the generalized repeated insertion model (GRIM).

Theorem 4. Let Q be a ranking distribution and σ a reference ranking. For any r ∈ Ω with insertion vector j^r (i.e., r = Φ_σ(j^r)), GRIM, using the insertion probabilities in Eq. 4, generates insertion vector j^r with probability Q'(j^r) = Q(r).
As shown in FIG. 3, it may be possible to illustrate GRIM using a simple example. In particular, FIG. 3 shows an example trace, or run, of the GRIM algorithm and the probability of any ranking that it may produce at each step of the process, in tabular format. The table 30 shows sampling from a conditional Mallows model on A = {a, b, c}, with dispersion φ, given evidence v. The resulting ranking distribution Q is given by the product of the conditional insertion probabilities and, as required, Q(r) = 0 iff r is inconsistent with the evidence v.
Sampling a Mallows Posterior: While GRIM may allow sampling from arbitrary distributions over rankings, as presented above it may be viewed as largely a theoretical device, since it requires inference to compute the required conditional probabilities. To sample from a Mallows posterior, given arbitrary pairwise comparisons v, the present invention may compute the required terms in a tractable fashion. The Mallows posterior may be given by:

P_v(r) = P(r | v) = φ^{d(r,σ)} 1[r ∈ Ω(v)] / Σ_{r' ∈ Ω(v)} φ^{d(r',σ)}, (5)

which may require summing over an intractable number of rankings to compute the normalization constant.
One embodiment of the present invention may use RIM for rejection sampling: sample unconditional insertion ranks, and reject a ranking at any stage if it is inconsistent with v. However, this may be impractical because of the high probability of rejection.
Instead it may be possible to use GRIM. The main obstacle is computing the insertion probability of a specific item given the insertion positions of previous items in Eq. 4 when Q' (more precisely, the corresponding Q) is the Mallows posterior. Indeed, this is #P-hard even with a uniform distribution over Ω(v).
Proposition 5. Given v, a reference ordering σ, a partial ranking r_{i−1} over {σ_1, ..., σ_{i−1}} and i ≤ m, computing the probability of inserting σ_i at rank j with respect to the uniform Mallows posterior P_v (i.e., computing P(j_i = j | j[i−1]) ∝ |{r ∈ Ω(v) consistent with the induced partial ranking}|) is #P-hard.
This suggests it may be hard to sample exactly, and that computing the normalization constant in a Mallows posterior may be difficult. Nevertheless, the present invention may include an approximate sampler AMP that is efficient to compute. While it can perform poorly in the worst case, it may, empirically, produce good posterior approximations. (It may be possible to provide theoretical bounds on approximation quality.) AMP uses the same intuitions as illustrated above: the (unconditional) insertion probabilities used by RIM are applied, but subject to the constraints imposed by v. At each step, the item being inserted may only be placed in positions that do not contradict tc(v). It may be possible to show that the valid insertion positions for any item, given v, form a contiguous "region" of the ranking (as shown in FIG. 1, wherein the valid insertion ranks for item e (10) are {2, 3}, given the previously inserted items and constraints v).
Proposition 6. Let the insertion of σ_1, ..., σ_{i−1} give a ranking r_{i−1} that is consistent with tc(v), and let r_{i−1}(j') denote the item at rank j' of r_{i−1}. Then inserting σ_i at rank j is consistent with tc(v) iff j ∈ {l_i, l_i + 1, ..., h_i}, where

l_i = 1 + max{ j' ≤ i − 1 : r_{i−1}(j') ≻ σ_i ∈ tc(v) }, or l_i = 1 if no such j' exists; (6)

h_i = min{ j' ≤ i − 1 : σ_i ≻ r_{i−1}(j') ∈ tc(v) }, or h_i = i if no such j' exists. (7)

Prop. 6 immediately suggests a modification of the GRIM algorithm, AMP, for approximate sampling of the Mallows posterior: first initialize the ranking r with σ_1 at rank 1. Then, for i = 2, ..., m, compute l_i and h_i, and insert σ_i at rank j ∈ {l_i, ..., h_i} with probability proportional to φ^{i−j}. AMP may induce a sampling distribution P̂_v that does not match the posterior P_v exactly:
indeed, the KL-divergence between the two may be severe, as the following example shows. Let A = {a_1, ..., a_m} and v = {a_2 ≻ a_3 ≻ ··· ≻ a_m}. Let P be the uniform Mallows prior (φ = 1) with σ = a_1···a_m. There are m rankings in Ω(v), one ranking r_i for each placement of a_1. The true Mallows posterior P_v is uniform over Ω(v). But AMP induces an approximation with P̂_v(r_i) = (1/2)^i for i ≤ m − 1 and P̂_v(r_m) = (1/2)^{m−1}. The KL-divergence of P_v and P̂_v is then m/2 + 1 − log_2 m − (m + 2)/(2m), which grows linearly with m. While AMP may perform poorly in the worst case, it may do well in practice. It may be possible to prove interesting properties, and to provide theoretical guarantees of exact sampling in important special cases. First, AMP always produces a ranking (insertion positions always exist given any consistent v). Furthermore:
Proposition 7. The support of the distribution P̂_v induced by AMP is Ω(v) (i.e., that of the Mallows posterior, Eq. 5).
Proposition 8. For any r ∈ Ω(v), AMP outputs r with probability:

P̂_v(r) = Π_{i=1}^m φ^{i−j_i} / (φ^{i−l_i} + φ^{i−l_i−1} + ··· + φ^{i−h_i}), (8)

where (j_1, ..., j_m) is the insertion vector of r. Using this result it may be possible to show that if v lies in the class of partitioned preferences, AMP's induced distribution is exactly the Mallows posterior.
Proposition 9. If v is partitioned, the distribution P̂_v induced by AMP is the Mallows posterior P_v.

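The AMP procedure described above can be sketched as follows (an illustrative implementation under the assumption that `prefs` is already transitively closed; variable names are ours). At step i the valid ranks form the contiguous window [l_i, h_i] of Prop. 6, and the item is inserted at rank j in that window with probability proportional to φ^(i−j):

```python
import random

def amp_sample(sigma, phi, prefs, rng=random):
    """Approximate Mallows-posterior sample given pairwise evidence.
    prefs: set of (x, y) pairs, x preferred to y, assumed transitively closed."""
    r = []
    for i in range(1, len(sigma) + 1):
        item = sigma[i - 1]
        lo, hi = 1, i  # valid insertion window [l_i, h_i] (Prop. 6)
        for pos, placed in enumerate(r, start=1):
            if (placed, item) in prefs:   # placed item must stay above this item
                lo = max(lo, pos + 1)
            if (item, placed) in prefs:   # this item must land above placed item
                hi = min(hi, pos)
        weights = [phi ** (i - j) for j in range(lo, hi + 1)]
        total = sum(weights)
        u, j, acc = rng.random() * total, lo, weights[0]
        while acc < u and j < hi:
            j += 1
            acc += weights[j - lo]
        r.insert(j - 1, item)
    return r
```

Every output lies in Ω(v), consistent with Prop. 7; for partitioned v the induced distribution is exactly the posterior (Prop. 9).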
While AMP may have (theoretically) poor worst-case performance, it may be possible to develop a statistically sound sampler, MMP, by using AMP to propose new rankings for the Metropolis algorithm. With Eq. 8, it may be possible to derive the acceptance ratio for Metropolis. At step t + 1 of Metropolis, let r^{(t)} be the previously sampled ranking. A ranking r, proposed by AMP independently of r^{(t)}, is accepted as r^{(t+1)} with probability

min{ 1, Π_{i=1}^m (h_i − l_i + 1) / (h'_i − l'_i + 1) } if φ = 1;

min{ 1, Π_{i=1}^m φ^{h'_i − h_i} (1 − φ^{h_i − l_i + 1}) / (1 − φ^{h'_i − l'_i + 1}) } otherwise, (9)

where the l_i's and h_i's are as in Eqs. 6 and 7, respectively (defined w.r.t. r), and l'_i and h'_i are defined similarly, but w.r.t. r^{(t)}. Prop. 7 may help show:
Theorem 10. The Markov chain defined by MMP is ergodic on the class of states Ω(v).
Sampling a Mallows Mixture Posterior: Extending the GRIM, AMP and MMP algorithms to sampling from a mixture of Mallows models may be straightforward. The prior art includes relatively little work on probabilistic models of partial rankings, and the prior art contains no known proposed generative models for arbitrary sets of consistent paired comparisons. One embodiment of the present invention may include such a model, while other embodiments may extend the algorithms and calculations to sample from a mixture of Mallows models.
It may be assumed that each agent ℓ has a latent preference ranking r_ℓ drawn from a Mallows mixture with parameters π = (π_1, ..., π_K), σ = (σ_1, ..., σ_K) and φ = (φ_1, ..., φ_K). Embodiments of the present invention may use a component indicator vector z = (z_1, ..., z_K) ∈ {0, 1}^K, drawn from a multinomial with proportions π, which specifies the mixture component from which an agent's ranking is drawn: if z_k = 1, r is sampled from the Mallows model with parameters (σ_k, φ_k). The observed data of the present invention may be a preference profile V = (v_1, ..., v_n). It may be possible to let Z = (z_1, ..., z_n) denote the latent indicators for each agent. To generate ℓ's preferences v_ℓ, it may be possible to use a simple distribution, parameterized by α ∈ [0, 1], that reflects a missing-completely-at-random assumption. (This may not be realistic in all settings, but may serve as a useful starting point for some embodiments of the present invention.) It may be possible to define P(v | r, α) = α^{|v|} (1 − α)^{m(m−1)/2 − |v|} if every pair in v is consistent with r, and P(v | r, α) = 0 otherwise. It may be possible to view this as a process in which an α-coin is flipped for each pair of items to decide whether that pairwise comparison in r is revealed by v. Taken together, the outcome is the joint distribution:

P(v, r, z | π, σ, φ, α) = P(v | r, α) P(r | z, σ, φ) P(z | π).
In embodiments of the present invention it may be possible to sample from the mixture posterior P(r, z | v, π, σ, φ, α) ∝ P(v | r, α) P(r | z, σ, φ) P(z | π). Such embodiments may utilize Gibbs sampling to alternate between r and z, since the posterior does not factor in a way that permits drawing samples exactly by sampling one variable, then conditionally sampling another. It may be possible to initialize with some z^{(0)} and r^{(0)}, then repeatedly sample the conditionals of z given r and of r given z. For the t-th sample, z^{(t)} may be drawn from a multinomial with K outcomes: P(z_k = 1 | r^{(t−1)}) ∝ π_k φ_k^{d(r^{(t−1)}, σ_k)} / Z_k. It may then be possible to sample r^{(t)} given z^{(t)}: P(r | z^{(t)}, v, σ, φ) ∝ φ_k^{d(r, σ_k)} 1[r ∈ Ω(v)] if z_k^{(t)} = 1. This form of Mallows posterior sampling may be applied by embodiments of the present invention with AMP or MMP.
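The Gibbs alternation above can be sketched for small item sets; for clarity, this illustrative sketch enumerates Ω(v) exactly in place of AMP/MMP, so it only scales to small m (all names are ours):

```python
import itertools
import math
import random

def z_mallows(phi, m):
    """Mallows normalization: Z = prod_{i=1..m} (1 + phi + ... + phi^(i-1))  (Eq. 3)."""
    return math.prod(sum(phi ** j for j in range(i)) for i in range(1, m + 1))

def kendall(r, sigma):
    """Kendall-tau distance between ranking r and reference ranking sigma."""
    pos = {x: i for i, x in enumerate(r)}
    return sum(1 for x, y in itertools.combinations(sigma, 2) if pos[x] > pos[y])

def gibbs_mixture(v, pis, sigmas, phis, steps, rng=random):
    """Alternate z | r (multinomial over components) and r | z (Mallows posterior,
    here sampled exactly by enumerating the linear extensions Omega(v))."""
    items = list(sigmas[0])
    omega_v = [list(p) for p in itertools.permutations(items)
               if all(p.index(x) < p.index(y) for x, y in v)]
    r, counts = rng.choice(omega_v), [0] * len(pis)
    for _ in range(steps):
        # z | r  ~  pi_k * phi_k^d(r, sigma_k) / Z_k
        w = [pis[k] * phis[k] ** kendall(r, sigmas[k]) / z_mallows(phis[k], len(items))
             for k in range(len(pis))]
        z = rng.choices(range(len(pis)), weights=w)[0]
        counts[z] += 1
        # r | z  ~  Mallows posterior restricted to Omega(v)
        pw = [phis[z] ** kendall(rr, sigmas[z]) for rr in omega_v]
        r = rng.choices(omega_v, weights=pw)[0]
    return counts
```

With two components centered at abc and cba and evidence a ≻ b, the chain visits the abc component more often, as the mixture posterior predicts.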
EM for Learning Mallows Mixtures: Armed with the sampling algorithms derived from GRIM, the present invention may implement maximum likelihood learning of the parameters π, σ and φ of a Mallows mixture using EM. Before implementing an EM algorithm or calculation, it may be necessary to consider the evaluation of the log likelihood, which is used to select K or test convergence.

Evaluating Log Likelihood: The log likelihood L(π, σ, φ, α | V) in models applied by the present invention may be written as:

L(π, σ, φ, α | V) = Σ_{ℓ ∈ N} ln [ Σ_{k=1}^K π_k (1/Z_k) Σ_{r ∈ Ω(v_ℓ)} φ_k^{d(r, σ_k)} ] + Σ_{ℓ ∈ N} ln [ α^{|v_ℓ|} (1 − α)^{m(m−1)/2 − |v_ℓ|} ],

where Z_k is the Mallows normalization constant for the k-th component. It may be easy to derive the maximum likelihood estimate for α: α̂ = 2 Σ_{ℓ} |v_ℓ| / (n m(m − 1)). So it may be possible to ignore this additive constant, and focus on the first term in the sum, denoted L(π, σ, φ | V).
Unfortunately, evaluating this quantity may be provably hard:

Theorem 11. Given the profile V = (v_1, ..., v_n), computing the log likelihood L(π, σ, φ | V) is #P-hard.
As a result, it may be possible for embodiments of the present invention to consider approximations. It is possible to rewrite L(π, σ, φ | V) as Σ_{ℓ ∈ N} ln E_{k ~ π} E_{r ~ (σ_k, φ_k)} 1[r ∈ Ω(v_ℓ)], and to estimate the inner expectations by sampling from the Mallows model. However, this can require exponential sample complexity in the worst case (e.g., if K = 1 and v is far from σ, a sample of exponential size may be needed to ensure v is represented in the sample). But it is possible to rewrite the summation inside the log as:
Σ_{k=1}^K π_k Σ_{r ∈ Ω(v_ℓ)} φ_k^{d(r, σ_k)} / Z_k,

and evaluate Σ_{r ∈ Ω(v_ℓ)} φ_k^{d(r, σ_k)} via importance sampling. It may be possible to generate samples using AMP, then empirically approximate:

Σ_{r ∈ Ω(v_ℓ)} φ_k^{d(r, σ_k)} ≈ (1/T) Σ_{t=1}^T φ_k^{d(r_{ℓk}^{(t)}, σ_k)} / P̂_{v_ℓ}(r_{ℓk}^{(t)}). (10)

Samples r_{ℓk}^{(1)}, ..., r_{ℓk}^{(T)} are generated with AMP(v_ℓ, σ_k, φ_k) for ℓ ≤ n and k ≤ K, and then P̂_{v_ℓ} from Eq. 8 is substituted into Eq. 10. With some algebraic manipulation it may be possible to obtain the estimate:

L̂(π, σ, φ | V) = Σ_{ℓ ∈ N} ln [ Σ_{k=1}^K (π_k / (T Z_k)) Σ_{t=1}^T w_{ℓkt} ], (11)

where w_{ℓkt} = Π_{i=1}^m (h_i^{(ℓkt)} − l_i^{(ℓkt)} + 1) if φ_k = 1, and w_{ℓkt} = Π_{i=1}^m φ_k^{i − h_i^{(ℓkt)}} (1 − φ_k^{h_i^{(ℓkt)} − l_i^{(ℓkt)} + 1}) / (1 − φ_k) otherwise, with h_i^{(ℓkt)} and l_i^{(ℓkt)} defined as in Eqs. 6 and 7 (w.r.t. the sample r_{ℓk}^{(t)}).
EM for Mallows Mixtures: Learning a Mallows mixture may be challenging, since even evaluating the log likelihood is #P-hard. Embodiments of the present invention may exploit the posterior sampling methods to render EM tractable. The EM approach of Neal & Hinton (1999) may be applied in embodiments of the present invention as follows (recall the invention may not need to consider α). The present invention may initialize the parameters with values π^{old}, σ^{old} and φ^{old}. For the E-step, instead of working directly with the intractable posterior P(r_ℓ, z_ℓ | v_ℓ, π^{old}, σ^{old}, φ^{old}), the present invention may use GRIM-based Gibbs sampling (as described herein) to obtain samples (r_ℓ^{(t)}, z_ℓ^{(t)}), for t ≤ T and ℓ ∈ N. In the M-step, it may be possible to find a (local) maximum, π^{new}, σ^{new}, φ^{new}, of the empirical expectation:

max_{π, σ, φ} Σ_{ℓ ∈ N} Σ_{t=1}^T ln P(v_ℓ, r_ℓ^{(t)}, z_ℓ^{(t)} | π, σ, φ).

If each parameter is fully maximized by the present invention, in the order π, σ, φ, it may be possible to obtain a global maximum.
Of course, exact optimization may be intractable, so the present invention may approximate the components of the M-step. Abusing notation, let the indicator z_ℓ^{(t)} denote the mixture component to which the t-th sample of agent ℓ belongs. The present invention may partition all agents' samples into such classes: let S_k = (ρ_{k1}, ..., ρ_{k n_k}) be the sub-sample of rankings r_ℓ^{(t)} that belong to the k-th component, i.e., where z_ℓ^{(t)} = k. Note that Σ_k n_k = nT. It may be possible to rewrite the M-step objective as:

Σ_{k=1}^K Σ_{i=1}^{n_k} ln [ P(v_{ℓ(k,i)} | ρ_{ki}, α) P(ρ_{ki} | σ_k, φ_k) P(z = k | π) ],

where ℓ(k, i) is the agent for sample ρ_{ki}. Embodiments of the present invention may ignore P(v_{ℓ(k,i)} | ρ_{ki}, α), which may only impact α; and ρ_{ki} ∈ Ω(v_{ℓ(k,i)}) is known. Thus, it may be possible to rewrite the objective as:
Σ_{k=1}^K Σ_{i=1}^{n_k} [ ln π_k + d(ρ_{ki}, σ_k) ln φ_k − ln Z_k ]. (12)

Optimizing π: Applying Lagrange multipliers to Eq. 12 yields:

π_k = n_k / (nT). (13)

Optimizing σ: The term involving σ in Eq. 12 is Σ_{k=1}^K Σ_{i=1}^{n_k} d(ρ_{ki}, σ_k) ln φ_k. Since ln φ_k is a negative scaling factor, and each σ_k may be optimized independently, it may be possible to obtain:

σ_k = argmin_σ Σ_{i=1}^{n_k} d(ρ_{ki}, σ). (14)

Optimizing σ_k may require computing a "Kemeny consensus" of the rankings in S_k, an NP-hard problem. Drawing on the notion of local Kemenization (Dwork et al., 2001), the present invention may instead compute a locally optimal σ_k, in which swapping two adjacent items in σ_k cannot reduce the sum of distances in the Kemeny objective.
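Local Kemenization as invoked above can be sketched as a simple adjacent-swap descent (an illustrative sketch; names are ours):

```python
def kemeny_cost(order, rankings):
    """Summed Kendall-tau distance from `order` to each ranking in `rankings`."""
    total = 0
    for r in rankings:
        pos = {x: i for i, x in enumerate(r)}
        for i, x in enumerate(order):
            for y in order[i + 1:]:
                if pos[x] > pos[y]:
                    total += 1
    return total

def local_kemeny(rankings, init):
    """Locally optimal consensus (cf. Dwork et al., 2001): swap adjacent items
    while any swap reduces the Kemeny objective of Eq. 14."""
    order = list(init)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            cand = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
            if kemeny_cost(cand, rankings) < kemeny_cost(order, rankings):
                order, improved = cand, True
    return order
```

The result is only locally optimal with respect to adjacent swaps, which is exactly the relaxation of the NP-hard Kemeny problem described above.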
Optimizing φ: Some embodiments of the present invention may optimize φ in Eq. 12. The objective decomposes into a sum that may permit independent optimization of each φ_k. Exact optimization of φ_k may be difficult; however, the present invention may use gradient ascent with:

∂(Eq. 12)/∂φ_k = Σ_{i=1}^{n_k} [ d(ρ_{ki}, σ_k)/φ_k − Σ_{j=1}^m ( (1 − φ_k^j) − j φ_k^{j−1} (1 − φ_k) ) / ( (1 − φ_k^j)(1 − φ_k) ) ].

Complexity of EM: In an embodiment of the present invention, one iteration of the E-step takes O(nT T_Gibbs T_Metro m² (Km + |v_ℓ|)) time, where T_Metro is the number of Metropolis steps, T_Gibbs is the number of Gibbs steps, and T is the posterior sample size for each v_ℓ. The M-step takes time O(K nT m²), dominated by the K tournament graphs used to compute the Kemeny consensus.
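The normalization term of the gradient expression for φ_k can be sanity-checked numerically against ln Z from Eq. 3; this sketch (names are ours) assumes φ < 1:

```python
import math

def ln_z_mallows(phi, m):
    """ln Z, with Z = prod_{i=1..m} (1 + phi + ... + phi^(i-1))  (Eq. 3)."""
    return sum(math.log(sum(phi ** j for j in range(i))) for i in range(1, m + 1))

def dlnz_dphi(phi, m):
    """Analytic d(ln Z)/d(phi), the per-sample normalization term of the gradient:
    sum_{j=1..m} [(1 - phi^j) - j*phi^(j-1)*(1 - phi)] / [(1 - phi^j)(1 - phi)]."""
    return sum(((1 - phi ** j) - j * phi ** (j - 1) * (1 - phi))
               / ((1 - phi ** j) * (1 - phi)) for j in range(1, m + 1))
```

The full gradient for component k is then Σ_i d(ρ_ki, σ_k)/φ_k minus n_k times `dlnz_dphi(φ_k, m)`, matching the expression above term by term.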
Application to Non-Parametric Estimation: Lebanon & Mao (2008) propose non-parametric estimators for Mallows models when observations form partitioned preferences. Indeed, they offer closed-form solutions by exploiting the existence of a closed form for the Mallows normalization with partitioned preferences. Unfortunately, with general pairwise comparisons, this normalization is intractable unless #P = P. But embodiments of the present invention can use AMP for approximate marginalization to support non-parametric estimation with general preference data. Define a joint distribution over Ω(v_l) × Ω by:

p(r, s) = φ^{d(r,s)} / (|Ω(v_l)| Z_φ),

where Z_φ is the Mallows normalization constant. This corresponds to drawing a ranking s uniformly from Ω(v_l), then drawing r from a Mallows distribution with reference ranking s and dispersion φ. The present invention may extend the non-parametric estimator to paired comparisons using:

p(r) = (1/n) Σ_{l ∈ N} Σ_{s ∈ Ω(v_l)} φ^{d(r,s)} / (|Ω(v_l)| Z_φ).

The present invention may approximate p using importance sampling: choose σ ∈ Ω(v_l) and sample rankings s^{(1)}, ..., s^{(T)} from AMP(v_l, σ, φ = 1), obtaining:

p̂(r) = (1/n) Σ_{l ∈ N} (Σ_{t=1}^{T} w_t φ^{d(r, s^{(t)})} / Z_φ) / (Σ_{t=1}^{T} w_t),

where w_t = 1/P̂_{v_l}(s^{(t)}) is computed using Eq. 8. Evaluating p̂ may also be intractable, but may be approximated using Eq. 10.
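A minimal sketch of the full-ranking special case of this non-parametric estimator (the Lebanon & Mao setting, where the Mallows kernel and its normalization are in closed form). The function names and representation are illustrative; the general pairwise-data version would replace the inner sum with the AMP importance-sampling estimate above, which is not shown:

```python
import itertools

def kendall_tau(r, s):
    """Number of item pairs on which rankings r and s disagree."""
    pos = {x: i for i, x in enumerate(s)}
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if pos[r[i]] > pos[r[j]])

def Z(phi, m):
    # Closed-form Mallows normalization: prod_{i=1}^{m} (1 - phi^i)/(1 - phi)
    prod = 1.0
    for i in range(1, m + 1):
        prod *= (1.0 - phi**i) / (1.0 - phi)
    return prod

def nonparametric_density(r, observed, phi):
    """Kernel density estimate over rankings: average of Mallows kernels
    centered at each observed full ranking."""
    m = len(r)
    return sum(phi ** kendall_tau(r, s) for s in observed) / (len(observed) * Z(phi, m))
```

Because each Mallows kernel integrates to one over the symmetric group, the estimate is a proper distribution over rankings for any dispersion φ in (0, 1).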
Experiments

Experiments utilizing the present invention have been undertaken to measure the quality of the AMP algorithm, both in isolation and in the context of log likelihood evaluation and EM.

Sampling Quality: Experiments utilizing the present invention assessed how well AMP approximates the true Mallows posterior P_v. The experiments varied the parameters m, φ and α, and fixed a canonical reference ranking σ = (1, 2, ..., m). For each parameter setting, the experiment of the present invention generated 20 preferences v using the mixture model, and evaluated the KL-divergence of P̂_v and P_v (normalized by the entropy of P_v). In summary, the experimental results show that AMP can approximate the posterior well, with average normalized KL error ranging from 1-5% across the parameter ranges tested.
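The evaluation metric above — KL-divergence normalized by the entropy of the true posterior — can be sketched as follows. This is an illustrative helper under the assumption that both distributions are given explicitly over a common support (one common convention for the divergence direction is shown):

```python
import math

def normalized_kl(p_hat, p):
    """KL(p_hat || p) divided by the entropy of p. Both distributions are
    dicts mapping outcomes (e.g. rankings) to probabilities over the same
    support; a value near 0 means p_hat closely matches p."""
    kl = sum(q * math.log(q / p[r]) for r, q in p_hat.items() if q > 0)
    entropy = -sum(v * math.log(v) for v in p.values() if v > 0)
    return kl / entropy
```

Dividing by the entropy makes the error comparable across parameter settings (m, φ) that yield posteriors of very different spread.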

Log Likelihood and EM on Synthetic Data: In summary, the sampling methods provided excellent approximations of the log likelihood, and EM successfully reconstructed artificially generated mixtures, using pairwise preferences as data.
Sushi: The Sushi dataset 20, as partially shown in FIG. 2, consists of 5000 full rankings over 10 varieties of sushi, indicating sushi preferences (Kamishima et al., 2005). The experiment used 3500 preferences for training and 1500 for validation. EM experiments were run by generating revealed paired comparisons for training with various probabilities α. To mitigate issues with local maxima, the EM of the present invention was run ten times (more than is necessary) for each instance. FIGs. 2a-2c, tables 20-24, show that, even without full preferences, EM may learn well with only 30-50% of all paired comparisons, though it may degrade significantly at 20%, in part because only 10 items are ranked (still, performance at 20% is good when K = 1, 2). With K = 6 components a good fit may be found when training on full preferences: FIG. 2 shows the learned clusters (all with reasonably low dispersion), illustrating interesting patterns (e.g., fatty tuna strongly preferred by all but one group; strong correlation across groups in preference/dispreference for salmon roe and sea urchin (atypical "fish"); and cucumber roll consistently dispreferred).
Movielens: The experiments applied the EM algorithm and calculations of the present invention to a subset of the Movielens dataset (see www.grouplens.org) to find "preference types" across users. 200 (out of roughly 3900) of the most frequently rated movies were used, as were the ratings of the 5980 users (out of roughly 6000) who rated at least one of these. Integer ratings from 1 to 5 were converted to pairwise preferences in the obvious way (for ties, no preference was added to v). 3986 preferences were used for training and 1994 for validation. The present invention ran EM with the number of components K = 1, ..., 20; for each K, EM was run 20 times to mitigate the impact of local maxima (more than is necessary). For each K, the average log likelihood of the best run on the validation set was evaluated to select K. Log likelihoods were approximated using the Monte Carlo estimates (with T = 120). The C++ implementation of the algorithms and calculations of the present invention gave EM wall clock times of 15-20 minutes (Intel(TM) Xeon dual-core, 3GHz), certainly practical for a data set of this size. Log likelihood results are shown in FIG. 2, table 24, as a function of the number of mixture components, suggesting that the best component sizes on the validation set include K = 5.
The present invention can incorporate a variety of algorithms and calculations to support the efficient and effective learning of ranking or preference distributions when observed data comprise a set of unrestricted pairwise comparisons of items. Given the fundamental nature of pairwise comparisons in revealed preference, the present invention may include methods that extend the reach of rank learning in a vital way. In particular, the GRIM algorithm may allow sampling of arbitrary distributions, including Mallows models conditioned on pairwise data. It may support a tractable approximation to the #P-hard problem of log likelihood evaluation of Mallows mixtures; and may form the heart of an EM algorithm or calculation that experiments have shown to be effective. GRIM may also be used for non-parametric estimation.
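The EM loop described above alternates between computing component responsibilities (E-step) and updating π, σ and φ (M-step). A minimal sketch of the E-step for the fully observed special case, where each datum is a complete ranking (names and representation are illustrative, not the patented implementation):

```python
def kendall_tau(r, s):
    pos = {x: i for i, x in enumerate(s)}
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if pos[r[i]] > pos[r[j]])

def Z(phi, m):
    # Closed-form Mallows normalization constant
    prod = 1.0
    for i in range(1, m + 1):
        prod *= (1.0 - phi**i) / (1.0 - phi)
    return prod

def responsibilities(ranking, pis, sigmas, phis):
    """E-step for one fully observed ranking: posterior probability z_k that
    it was generated by each Mallows mixture component (pi_k, sigma_k, phi_k)."""
    m = len(ranking)
    joint = [pi * phi ** kendall_tau(ranking, sigma) / Z(phi, m)
             for pi, sigma, phi in zip(pis, sigmas, phis)]
    total = sum(joint)
    return [j / total for j in joint]
```

With pairwise data, the exact component likelihood is intractable and the patent's GRIM/AMP sampling stands in for the closed-form Mallows probability used here.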
General System Implementation

The present system and method may be practiced in various embodiments. A
suitably configured computer device, and associated communications networks, devices, software and firmware may provide a platform for enabling one or more embodiments as described above. By way of example, FIG. 5 shows a generic computer device 100 that may include a central processing unit ("CPU") 102 connected to a storage unit 104 and to a random access memory 106. The CPU 102 may process an operating system 101, application program 103, and data 123. The operating system 101, application program 103, and data 123 may be stored in storage unit 104 and loaded into memory 106, as may be required. Computer device 100 may further include a graphics processing unit (GPU) 122 which is operatively connected to CPU 102 and to memory 106 to offload intensive image processing calculations from CPU 102 and run these calculations in parallel with CPU 102. An operator 107 may interact with the computer device 100 using a video display 108 connected by a video interface 105, and various input/output devices such as a keyboard 110, mouse 112, and disk drive or solid state drive 114 connected by an I/O
interface 109. In known manner, the mouse 112 may be configured to control movement of a cursor in the video display 108, and to operate various graphical user interface (GUI) controls appearing in the video display 108 with a mouse button. The disk drive or solid state drive 114 may be configured to accept computer readable media 116. The computer device 100 may form part of a network via a network interface 111, allowing the computer device 100 to communicate with other suitably configured data processing systems (not shown). One or more different types of sensors 130 may be used to receive input from various sources.
The present system and method may be practiced on virtually any manner of computer device including a desktop computer, laptop computer, tablet computer or wireless handheld. The present system and method may also be implemented as a computer-readable/useable medium that includes computer program code to enable one or more computer devices to implement each of the various process steps in a method in accordance with the present invention. In the case of more than one computer device performing the entire operation, the computer devices are networked to distribute the various steps of the operation. It is understood that the terms "computer-readable medium" or "computer-useable medium" comprise one or more of any type of physical embodiment of the program code. In particular, the computer-readable/useable medium can comprise program code embodied on one or more portable storage articles of manufacture (e.g. an optical disc, a magnetic disk, a tape, etc.), or on one or more data storage portions of a computing device, such as memory associated with a computer and/or a storage system.
It will be appreciated by those skilled in the art that other variations of the embodiments described herein may also be practiced without departing from the scope of the invention.
Other modifications are therefore possible.

Claims (17)

We claim:
1. A computer implemented method of identifying and ranking preferences regarding a plurality of options, for a group including two or more members, the method comprising:
(a) obtaining from one or more data sources, preference information, including partial preference information, for the members, or a subset of the members, wherein the partial preference information may include a set of pairwise comparisons involving one or more of the options;
(b) analyzing, by operation of one or more server computers, the pairwise comparisons so as to learn one or more statistical models for inferring and ranking a set of preferences based on the partial preference information;
and (c) applying the one or more statistical models so as to identify the set of preferences and rank the options.
2. The method of claim 1, wherein the one or more statistical models are selected to fit with the partial preference information.
3. The method of claim 1, wherein a plurality of mixtures of statistical models for inferring and ranking preferences is selected, thus enabling the formation of clusters consisting of probabilistic distributions applied to segments of the group.
4. The method of claim 1, comprising the further step of automatically determining (i) a series of model parameters that best fit the available preference information for selecting one or more statistical models that best fits the preference information, and (ii) based on the model parameters, selecting one or more applicable statistical models.
5. The method of claim 4, comprising predicting unobserved preferences of specific members.
6. The method of claim 1, wherein the one or more statistical models include a Mallows model for specifying a probability distribution over a ranking of the choices.
7. The method of claim 6, wherein the Mallows model is specified by a mean ranking reflecting the average preferences of the group plus a dispersion parameter representing the variability of preferences in the group.
8. The method of claim 1, wherein the preference information is obtained from one or more of:
(a) user ratings or comparisons of products/services on an explicit basis;
and (b) user actions such as product selections, social media interactions or clicking on web links, past survey responses, on an implicit basis.
9. The method of claim 4, wherein the model parameters are inferred given observations of user behavior, survey data, or implicit choice data, and the model parameters consist of inferred clusters of users, or preference types, based on partial preference data.
10. A computer network implemented system for identifying and ranking preferences regarding a plurality of options, for a group including two or more members, the system comprising:
(a) one or more server computers, connected to an interconnected network of computers, and linked to a server application;
(b) the server application includes or is linked to a preference inference engine that:
(i) obtains from one or more data sources linked to the one or more server computers, preference information, including partial preference information, for the members, or a subset of the members, wherein the partial preference information may include a set of pairwise comparisons involving one or more of the options;
(ii) analyzes the pairwise comparisons so as to learn one or more statistical models for inferring and ranking a set of preferences based on the partial preference information; and (iii) applies the one or more statistical models so as to identify the preferences and rank the set of options.
11. The system of claim 10, wherein the one or more statistical models are selected to fit with the partial preference information.
12. The system of claim 10, wherein the server application is operable to automatically determine (i) a series of model parameters that best fit the available preference information for selecting one or more statistical models that best fit the preference information, (ii) based on the model parameters, select one or more applicable statistical models, and (iii) apply, via the inference engine, the selected one or more applicable statistical models so as to infer a preference set or preference ranking.
13. The system of claim 10, wherein the inference engine is operable to predict unobserved preferences of specific members.
14. The system of claim 10, wherein the one or more statistical models include a Mallows model for specifying a probability distribution over a ranking of the choices.
15. The system of claim 14, wherein the Mallows model is specified by a mean ranking reflecting the average preferences of the group plus a dispersion parameter representing the variability of preferences in the group.
16. The method of claim 1, wherein identifying the preferences enables the prediction of the preferences, using the pairwise comparisons and application of a Mallows model/mixture.
17. The system of claim 10, operable to enable the prediction of the preferences, by applying a Mallows model/mixture to the pairwise comparisons.
CA2828490A 2011-03-08 2012-03-08 System and method for identifying and ranking user preferences Abandoned CA2828490A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201161450286P 2011-03-08 2011-03-08
US61/450,286 2011-03-08
PCT/CA2012/000229 WO2012119245A1 (en) 2011-03-08 2012-03-08 System and method for identifying and ranking user preferences

Publications (1)

Publication Number Publication Date
CA2828490A1 true CA2828490A1 (en) 2012-09-13

Family

ID=46797368

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2828490A Abandoned CA2828490A1 (en) 2011-03-08 2012-03-08 System and method for identifying and ranking user preferences

Country Status (3)

Country Link
US (1) US9727653B2 (en)
CA (1) CA2828490A1 (en)
WO (1) WO2012119245A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200057950A1 (en) * 2012-07-09 2020-02-20 Ringit, Inc. Personal Taste Assessment Method and System

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020945B1 (en) * 2013-01-25 2015-04-28 Humana Inc. User categorization system and method
US20150294230A1 (en) * 2014-04-11 2015-10-15 Xerox Corporation Methods and systems for modeling cloud user behavior
KR101605654B1 (en) * 2014-12-01 2016-04-04 서울대학교산학협력단 Method and apparatus for estimating multiple ranking using pairwise comparisons
CN105550211A (en) * 2015-12-03 2016-05-04 云南大学 Social network and item content integrated collaborative recommendation system
US20170316442A1 (en) * 2016-02-09 2017-11-02 Hrl Laboratories, Llc Increase choice shares with personalized incentives using social media data
US20190130448A1 (en) * 2017-10-27 2019-05-02 Dinabite Limited System and method for generating offer and recommendation information using machine learning
US10599769B2 (en) * 2018-05-01 2020-03-24 Capital One Services, Llc Text categorization using natural language processing
CN111383060A (en) * 2020-03-18 2020-07-07 浙江大搜车软件技术有限公司 Vehicle price determination method and device, electronic equipment and storage medium
US20210383486A1 (en) * 2020-06-09 2021-12-09 Jessica Jane Robinson Real-time data stream for carbon emission aggregation
CA3194695A1 (en) 2020-10-01 2022-04-07 Thomas KEHLER Probabilistic graphical networks

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860818B2 (en) * 2006-06-29 2010-12-28 Siemens Corporation System and method for case-based multilabel classification and ranking
WO2012119242A1 (en) * 2011-03-04 2012-09-13 Tian Lu Method and system for robust social choices and vote elicitation


Also Published As

Publication number Publication date
US20140181102A1 (en) 2014-06-26
WO2012119245A1 (en) 2012-09-13
US9727653B2 (en) 2017-08-08


Legal Events

Date Code Title Description
FZDE Discontinued

Effective date: 20170308