US20060277591A1 - System to establish trust between policy systems and users - Google Patents

System to establish trust between policy systems and users

Info

Publication number
US20060277591A1
Authority
US
United States
Prior art keywords: trust, policy, level, operational, application
Legal status
Abandoned
Application number
US11/145,775
Inventor
William Arnold
Hoi Chan
Alla Segal
Ian Whalley
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US11/145,775
Assigned to IBM CORPORATION. Assignors: WHALLEY, IAN N.; ARNOLD, WILLIAM C.; CHAN, HOI YEUNG; SEGAL, ALLA
Publication of US20060277591A1
Priority to US12/545,167 (US7958552B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules


Abstract

A system and method are provided to establish trust between a user and a policy system that generates recommended actions in accordance with specified policies. Trust is introduced into the policy-based system by assigning a value to each execution of each policy with respect to the policy-based system, called the instantaneous trust index. The instantaneous trust indices for each one of the policies, for each execution of a given policy or for both are combined into the overall trust index for a given policy or for a given policy-based system. The recommended actions are processed in accordance with the level of trust associated with a given policy as expressed by the trust indices. Manual user input is provided to monitor or change the recommended actions. In addition, reinforcement learning algorithms are used to further enhance the level of trust between the user and the policy-based system.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • Pursuant to 35 U.S.C. §119(e), the present application claims priority to co-pending provisional application No. 60/686,471 filed Jun. 1, 2005. The entire disclosure of that application is incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates to policy-based computing systems.
  • BACKGROUND OF THE INVENTION
  • The use of business rules and policies to externalize business and operational logic from an application is an important concept and approach to building large business applications and to new areas such as self-managing systems or autonomic computing systems. Business rules and policies are statements that are intended to be readable and modifiable by non-technical users and executable by an underlying mechanism such as a rule engine or a Java Virtual Machine (JVM), allowing application logic to be authored and modified external to the application.
  • One of the key aspects of using these business rules or policies is the ability to specify a priority for each of the rules in a set of business rules. A business rule set is a collection of rules selected and arranged to achieve a desired goal. Assigning a priority to each rule contained in the rule set controls the sequence of execution of those rules in the rule set. Typically, priorities are initially established and assigned by a rule author; however, priority of the rules can be subsequently modified in accordance with application specific parameters, i.e. different situations and execution environments.
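  • As an editorial illustration of the priority mechanism described above, the following minimal sketch shows a rule set whose execution order is controlled by per-rule priorities; the class and function names are illustrative assumptions and are not part of the original disclosure.

```python
# Minimal sketch of a priority-ordered business rule set: each rule pairs a condition
# with an action, and the rule set executes matching rules in priority order.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    priority: int                       # higher value = evaluated earlier
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], None]


@dataclass
class RuleSet:
    rules: List[Rule] = field(default_factory=list)

    def execute(self, facts: Dict) -> None:
        # Priority controls the sequence of execution within the rule set.
        for rule in sorted(self.rules, key=lambda r: r.priority, reverse=True):
            if rule.condition(facts):
                rule.action(facts)


if __name__ == "__main__":
    rs = RuleSet([
        Rule("scale-out", 10, lambda f: f["load"] > 0.8,
             lambda f: f.setdefault("actions", []).append("deploy extra server")),
        Rule("log-only", 1, lambda f: True,
             lambda f: f.setdefault("actions", []).append("record load sample")),
    ])
    facts = {"load": 0.9}
    rs.execute(facts)
    print(facts["actions"])  # ['deploy extra server', 'record load sample']
```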
  • The use of policy-based systems has become increasingly common. For example, the emerging areas of autonomic and on demand computing are accelerating the adoption of policy-based systems. As the requirements on policy-based systems become more complex, traditional approaches to the implementation of such systems, for example relying entirely on simple “if [condition] then [actions]” rules, become insufficient. New approaches to the design and implementation of policy-based systems have emerged, including goal policies, utility functions, data mining, reinforcement learning and planning.
  • One issue regarding the use or implementation of policy-based systems is establishing the same level of trust among users and system administrators for policy-based systems as exists for traditional systems. Unless policy-based systems are trusted at least as much as traditional systems, increases in the acceptance level of policy-based systems will be hindered. In addition, a system administrator needs to know that a policy-based system will help the administrator's system perform better. Unfortunately, current approaches to the design and implementation of policy-based systems do nothing to reduce administrators' skepticism towards policy-based automation.
  • In general, trust can be viewed as an abstract concept that involves a complex combination of fundamental qualities such as reliability, competence, dependability, confidence and integrity. Research has been conducted in the area of multi-agent systems on the concept of trust. In this research, trust is defined quantitatively as the level of dependability and competence associated with a given software agent as compared to other similar software agents. As policy-based systems evolved from the use of relatively simple “if/then” rules to more sophisticated and powerful components that utilize goals and utility function policies, data mining and reinforcement learning among others, the level of trust associated with a given policy-based system has become an important factor in determining the use of that policy-based system as an integral part of overall systems management. Information Technology (IT) managers are likely to be hesitant to trust an autonomous policy-based system to run the entire IT operations without first establishing a certain level of trust in that autonomous policy-based system. Therefore, trust between a policy-based system and the users of that system is needed to encourage adoption and implementation of a given policy-based system.
  • Current issues regarding trust in policy-based systems have concentrated on user interface issues. In R. Barrett, People and Policies, Policies for Distributed Systems and Networks (2004), the necessity of gaining a user's trust is discussed, as are ways to make policy-based systems trustworthy. E. Kandogan and P. Maglio, Why Don't You Trust Me Anymore? Or the Role of Trust in Troubleshooting Activity of System Administrators, Conference on Human Computer Interaction (2003), addresses the role of trust in the work of system administrators. Again, the majority of this work focuses on user interface matters, rather than on the design and operation of the system itself. Very few studies have been conducted on the issue of trust between users and software systems where the actions of the software systems are determined via prescribed policies or other autonomous mechanisms. In addition, no general tools are available that allow a policy system to earn a user's trust.
  • SUMMARY OF THE INVENTION
  • The present invention is directed to systems and methods that provide for the establishment of trust between a user and a policy based system. Instead of earning trust over a lengthy period of positive user experiences, a systematic approach is used where trust is established gradually through a mix of operation strategies with user interaction and feedback learning.
  • The concept of “trust” is introduced into the policy-based system by assigning a value to each execution of each policy with respect to the policy-based system. This value is called the instantaneous trust index (ITI). Each policy-based system can contain a number of separate policies, and each policy in the policy-based system has an associated ITI. In addition, an ITI is generated for each execution of a given policy within the policy-based system. The ITI's for each one of a plurality of policies, for each execution of a given policy or for both are combined into the overall trust index (OTI) for a given policy or for a given policy-based system. The OTI for a policy or policy-based system reflects the level of trust that a user, for example an administrator with expert domain knowledge, has in a particular policy or group of policies. The established OTI can be associated with the policy, for example as a parameter included with each policy; therefore, the user can examine the OTI when selecting a policy to be used. For example, the user can select the policy having the highest trust level, i.e. OTI, from among a group of policies.
  • Suitable methods for computing the ITI include, for example, examining what fraction of actions suggested from the execution of a particular policy rule the user accepts unchanged or examining the extent to which the user changes or modifies the suggested actions. In addition, reinforcement learning techniques are used in combination with the ITI and OTI so that a given policy or policy-based system can adjust its behavior to maximize its trust index, i.e. to increase the trust of the user.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram representation of a policy system for use with the present invention;
  • FIG. 2 is a block diagram representation of an embodiment of a combination policy system and trust component in accordance with the present invention;
  • FIG. 3 is a graph illustrating the trust index of a given policy over time; and
  • FIG. 4 is a flow chart illustrating an embodiment of a reinforcement learning feedback loop for use in accordance with the present invention.
  • DETAILED DESCRIPTION
  • Referring initially to FIG. 1, an exemplary embodiment of a policy or policy-based system for use with trust building tools 10 in accordance with the present invention is illustrated. As illustrated, an application 12 is interfaced with a policy implementation and enforcement system 16 that provides for the automated implementation of rules and policies to control or to modify the application. The policy system monitors decision points 14 within the application and uses these decision points to develop decisions 18 regarding actions to be taken within the application to implement pre-determined and user-inputted policies contained within the policy system. A user or system administrator is responsible for the operation of the application. However, implementation of the actions decided upon by the policy system affects the operation of the application, and the policy system is constructed to operate autonomously and without the need for user input or oversight. Therefore, the user or administrator remains responsible for the actions implemented by the policy system, requiring the user to trust the policy system to develop and implement actions that will benefit the application. Suitable users include any user of policy-based systems, including system administrators and persons with expert domain knowledge. This trust between the user and the policy-based system is established by coupling a decision-modifying trust component to the policy-based system.
  • In one exemplary embodiment in accordance with the present invention, at least one policy, for example from a policy-based system containing a plurality of policies, that is capable of governing operational aspects of the application that is being controlled by the policy-based system is identified. Alternatively, a plurality of policies is identified, and each identified policy is capable of governing operational aspects of the application.
  • Methods in accordance with the present invention introduce an aspect of trust that is associated with using that identified policy to govern the operational aspects of the application. In one embodiment, trust is an expression of the level of trust that a user or system administrator that is responsible for the operation of the application has in the policy. In one embodiment, trust is introduced into a policy-based system by determining the level of trust, i.e. user trust, associated with using the identified policy to govern the operational aspects of the application. In one embodiment, the level of trust is determined with user-defined criteria. Suitable user-defined criteria include, but are not limited to, reliability of the policy and dependability of the policy.
  • Since a given policy can be repeatedly used or applied, a new level of trust is determined for the policy upon each use of the application to govern the operational aspects of the application. All of these separate levels of trust for the same policy can be combined or aggregated into an overall trust level. For example, an instantaneous trust index (ITI) is assigned to each execution of each policy with respect to a policy-based system. For a single given policy, the ITI associated with each execution is combined into an overall trust index (OTI) for that policy, for example by averaging the ITI's over a period of time.
  • When a plurality of policies is identified, a level of trust is determined for each identified policy in the plurality of policies. The level of trust for each one of the identified policies is then combined into an overall trust level. For example, the ITI's associated with each of a plurality of policies in a given policy-based system are combined into an OTI for that policy-based system. Therefore, the OTI is an expression of the level of trust that a given user has in a particular policy or group of policies for a given occurrence or application of the plurality of policies. In one embodiment, the determined level of trust is associated with the identified policy and used as a parameter by the user or system administrator in determining when to select and use the policy, i.e. the level of trust is used like a priority in the policy-based system.
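  • The following sketch illustrates, in simplified form, how per-execution ITI's could be aggregated into per-policy OTI's and how the resulting trust level could be used like a priority when selecting a policy; the registry class and the simple-average aggregation are assumptions made for illustration only.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List


class TrustRegistry:
    """Tracks the ITI of each policy execution and exposes per-policy OTIs.

    The simple-average OTI is an illustrative assumption; any normalized
    aggregation (e.g. a moving or historical average) would fit the scheme above.
    """

    def __init__(self) -> None:
        self._itis: Dict[str, List[float]] = defaultdict(list)

    def record(self, policy: str, iti: float) -> None:
        self._itis[policy].append(iti)

    def oti(self, policy: str) -> float:
        history = self._itis[policy]
        return mean(history) if history else 0.0

    def most_trusted(self) -> str:
        # The OTI is used like a priority when choosing which policy to apply.
        return max(self._itis, key=self.oti)


if __name__ == "__main__":
    registry = TrustRegistry()
    for iti in (0.4, 0.6, 0.5):
        registry.record("load_adjustment", iti)
    registry.record("failover", 0.9)
    print(round(registry.oti("load_adjustment"), 2), registry.most_trusted())  # 0.5 failover
```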
  • Having determined the level of trust associated with the policy or group of policies, this determined level of trust is used to select an operational trust state that defines the level of autonomy with which the policy-based system operates to govern the operational states of the application. An increased level of trust corresponds to an increased level of autonomy, and a decreased level of trust corresponds to a lower level of autonomy. The level of trust can be the level of trust associated with a single occurrence of a single policy, the overall trust level associated with multiple occurrences of a single policy or the overall trust level associated with the use of multiple policies. In one embodiment, the operational trust level controls the amount of input or interaction a user provides during implementation of the policy. For example, the operational trust state can be selected from among a plurality of operational trust states. These operational trust states include, but are not limited to, a fully supervised trust state, a partially modifiable trust state and an unsupervised, full trust state.
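  • A minimal sketch of how a determined trust level could be mapped onto the operational trust states named above follows; the threshold values are assumptions chosen for illustration and are not specified by the disclosure.

```python
from enum import Enum


class TrustState(Enum):
    FULLY_SUPERVISED = "fully supervised"          # minimal trust: every action reviewed
    PARTIALLY_MODIFIABLE = "partially modifiable"  # parameters may be adjusted by the user
    UNSUPERVISED = "unsupervised full trust"       # actions execute automatically


def select_trust_state(trust_level: float,
                       partial_threshold: float = 0.5,
                       full_threshold: float = 0.9) -> TrustState:
    """Map a trust level in [0, 1] to an operational trust state.

    Higher trust corresponds to a higher level of autonomy; the 0.5 and 0.9
    thresholds are illustrative assumptions, not values from the disclosure.
    """
    if trust_level >= full_threshold:
        return TrustState.UNSUPERVISED
    if trust_level >= partial_threshold:
        return TrustState.PARTIALLY_MODIFIABLE
    return TrustState.FULLY_SUPERVISED


if __name__ == "__main__":
    for level in (0.2, 0.6, 0.95):
        print(level, select_trust_state(level).value)
```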
  • Although an initial operational trust state is determined, this operational trust state can be varied over time in response to changes in the level of trust associated with a given policy. In one embodiment, the selected operational trust state is increased in response to an increase in the level of trust. Conversely, the selected operational trust state is decreased in response to a decrease in the level of trust. In one embodiment, a given determined level of trust is associated with a particular operating mode of the policy-based system. Suitable operating modes include automatic modes and manual modes. The level of trust is changed by changing the operating mode.
  • Policies are implemented in the application by creating recommended actions that affect the operating conditions of the application to be consistent with the policies. In one embodiment, at least one policy recommended action is identified to affect the operational aspects of the application upon implementation. In another embodiment, a plurality of policy recommended actions is identified. These recommended actions can be implemented as recommended, not implemented, or modified prior to implementation. In one embodiment, the disposition or modification of the recommended actions, including the quantity and quality of any modifications, is taken into account when calculating a level of trust associated with the policy that produced the recommended actions. In one embodiment, the identified modifications are used to calculate the ITI. Methods for computing the ITI include, for example, calculating the fraction of actions suggested from the execution of a particular policy rule that are actually accepted and implemented by the user unchanged. In another embodiment, any changes made by the user to the suggested actions of the policy rule are examined, and a value or weight is assigned that correlates to the extensiveness of the changes or the relationship between the action as suggested and the action as implemented. In another embodiment, a value or weight is assigned to any suggested action of the policy that is completely disregarded by the user.
  • In one embodiment, ITI takes into account modifications of policy recommended actions by the user and is expressed by the equation ITI = ƒ(m1, m2, . . . , mn), where m1, m2, . . . , mn are weights assigned to each one of n different user modifications. The function ITI = ƒ(m1, m2, . . . , mn) is normalized such that 0 ≤ ITI ≤ 1.
  • In one embodiment, the corresponding OTI for this user-modification based ITI is expressed by the equation OTI = ƒ1(ITI1, ITI2, . . . , ITIk), where ITI1, ITI2, . . . , ITIk are the ITI's associated with each one of k executions of a given policy. In one embodiment, ƒ1( ) represents a moving or historical average and is normalized such that 0 ≤ OTI ≤ 1.
  • In one embodiment, for a group of policies G, OTI(G) is represented as a weighted average of the OTI's for each policy that is a member of the group of policies G. This weighted average is represented as OTI(G) = (w1·OTI1 + w2·OTI2 + . . . + wl·OTIl)/l, where wx is the weight assigned to each member policy in the group G containing l different policies and is normalized such that 0 ≤ OTI(G) ≤ 1.
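  • The following sketch gives one concrete, normalized realization of the index functions above; the particular choices (a clipped weighted sum for ITI, a moving average for OTI) are illustrative assumptions consistent with the stated 0-to-1 normalization.

```python
from typing import Sequence


def iti(modification_weights: Sequence[float]) -> float:
    """ITI = f(m1, ..., mn): here, 1 minus the clipped sum of modification weights.

    Treating the weights as penalties is one plausible choice of f; the disclosure
    only requires the result to lie between 0 and 1.
    """
    return 1.0 - min(sum(modification_weights), 1.0)


def oti(itis: Sequence[float], window: int = 10) -> float:
    """OTI = f1(ITI1, ..., ITIk): a moving average over the most recent executions."""
    recent = list(itis)[-window:]
    return sum(recent) / len(recent) if recent else 0.0


def group_oti(policy_otis: Sequence[float], weights: Sequence[float]) -> float:
    """OTI(G) = (w1*OTI1 + ... + wl*OTIl) / l, with weights in [0, 1] so 0 <= OTI(G) <= 1."""
    count = len(policy_otis)
    return sum(w * o for w, o in zip(weights, policy_otis)) / count if count else 0.0


if __name__ == "__main__":
    print(round(iti([0.1, 0.2]), 2))                    # 0.7 after modest modifications
    print(round(oti([0.4, 0.6, 0.8]), 2))               # 0.6
    print(round(group_oti([0.9, 0.5], [1.0, 1.0]), 2))  # 0.7
```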
  • In one embodiment, trust, whether expressed as ITI or OTI, is represented as a number between 0 and 1. Alternatively, trust is defined as an aggregate of its individual attributes, for example reliability, competence and dependability. These attributes can be user-defined. Each of these attributes is measured individually due to different application requirements. Important information could potentially be lost if these various aspects are combined or reduced into a single scalar quantity. In addition, if the number of users and policies involved exceeds a certain threshold, interactions among the various aspects can be difficult to coordinate.
  • An exemplary embodiment of a policy system in combination with a trust component 20 in accordance with the present invention is illustrated in FIG. 2. The combination policy system and trust building tools includes a policy system 22 in combination with a decision-modifying trust component 24. Suitable policy systems include any type of policy system known and available in the art. For example, the policy system enforces policies and business rules in accordance with a pre-determined ranking system.
  • The decision-modifying trust component provides the calculation and application of ITI and OTI with respect to a given policy or group of policies applied by the policy system. The decision-modifying trust component includes an initial trust decision 26 for each policy or group of policies that are assigned an OTI. In one embodiment, the initial trust decision is performed automatically based upon an associated ITI or OTI. Alternatively, the initial decision is performed manually by the user by placing the system into a desired trust mode at will on a per-policy basis. Whether the initial decision is performed automatically or manually, the combined system is placed into one of a plurality of trust modes. As illustrated, three different trust modes are possible: minimal trust or supervised mode 34, partial trust or modify mode 30, and full trust or automatic mode 28. Although illustrated with three trust level modes, systems in accordance with the present invention can have more or fewer than three trust modes. Having more than three trust modes provides greater fine tuning of the desired trust mode. The selected trust mode determines how the actions chosen by the policy system are executed.
  • In the full trust mode 28, the actions recommended by the policy system pass through to a final decision 36 without active input or modification from the user or system administrator. The final decision then implements those actions 42. In the minimum trust mode 34 and the partial trust mode 30, user modifications 32 are made to the actions recommended by the policy system, and the modified actions are forwarded to the final decision 36 system for implementation as modified. In addition, the final decision system 36 reports the details of any changes or modifications to the recommended actions, together with the conditions under which such modifications were made, to a knowledge base (KB) 38 in communication with the final decision system 36. The modifications and conditions are recorded by the KB, and the KB uses these data to generate and update the appropriate ITI's and OTI's, which are stored in a trust index database 40. In one embodiment, the KB also uses reinforcement learning algorithms to adjust the behavior of a given policy or set of policies to maximize the ITI or OTI. In one embodiment, a trust weighted value is assigned to each policy recommended action to maximize the likelihood of the policy being accepted by a user and to increase the overall trust level of the policy-based system. Therefore, the policy system 22 modifies its behavior so as to increase the level of user trust in that policy system.
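  • The control flow described above can be sketched as follows; the component interfaces are assumptions that loosely follow FIG. 2, with recommended actions passing straight through in the full trust mode, routed through user modification otherwise, and the outcome reported to a knowledge base that updates the stored trust indices.

```python
from typing import Callable, Dict, List


class KnowledgeBase:
    """Records how recommended actions were modified and updates trust indices."""

    def __init__(self) -> None:
        self.trust_index_db: Dict[str, List[float]] = {}

    def report(self, policy: str, recommended: List[str], implemented: List[str]) -> None:
        # Simplified ITI: the fraction of recommended actions implemented unchanged.
        accepted = sum(1 for action in recommended if action in implemented)
        iti = accepted / len(recommended) if recommended else 0.0
        self.trust_index_db.setdefault(policy, []).append(iti)


def final_decision(policy: str, mode: str, recommended: List[str],
                   user_review: Callable[[List[str]], List[str]],
                   kb: KnowledgeBase) -> List[str]:
    """Route recommended actions according to the current trust mode."""
    if mode == "full":
        implemented = list(recommended)          # pass through without user input
    else:                                        # minimum or partial trust modes
        implemented = user_review(recommended)   # user accepts, modifies or drops actions
    kb.report(policy, recommended, implemented)
    return implemented


if __name__ == "__main__":
    kb = KnowledgeBase()
    actions = ["deploy server", "suspend batch jobs"]
    final_decision("load_adjustment", "minimum", actions,
                   user_review=lambda acts: acts[:1], kb=kb)
    print(kb.trust_index_db)   # {'load_adjustment': [0.5]}
```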
  • The trust index database 40 is also in communication with the policy system. Therefore, for a given policy or set of policies, the policy system creates subsequent policy-recommended actions having increased trust, preferably causing subsequent actions to progress up through the trust modes from minimum trust to full trust. In addition, a monitoring system 44, for example a computer, is provided to allow user monitoring and control of the system 20. In one embodiment, the monitoring system is used to display the determined level of trust for a given policy. The displayed level of trust is utilized by the user or administrator in selecting a given policy for use in governing the operational aspects of the application.
  • In one embodiment for new users or new policy systems, the policy-recommended actions will initially be handled in the minimum trust mode, because no trust has been established or built-up between the policy system and the user. In the minimum trust mode, the policy-based system 22 uses the prescribed policies to generate recommended actions. These actions, however, are not automatically executed. Instead, the user examines and reviews the recommended actions, for example using the monitoring system 44. The user can accept the recommended actions as recommended, propose modifications to the recommended actions, ignore the recommended actions and propose a separate set of actions or decide not to implement any actions.
  • Given the user-defined modifications or disposition of the recommended actions, a level of trust value is assigned to the policy execution. For example, if for the execution of a given policy, the policy system recommended actions are accepted without modification, then the highest trust value is associated with that execution of the policy. For example, an ITI of 1 is assigned for this policy execution. Conversely, an ITI of 0 is assigned for the current policy execution if either all policy system-recommended actions are ignored by the user and completely replaced by user-defined actions or no actions are implemented by the user. Otherwise, an ITI is assigned to the current policy execution as specified by a pre-determined function of the amount of modification. This function takes into account parameters that describe the quality and quantity of the modifications, including the number of modifications, type of modifications and extent of modifications. The functions can express linear or higher order relationships between the modifications and the assigned ITI value. In addition to evaluating the type and quantity of the modifications, a user-provided explanation of the modifications can also be considered in determining an appropriate ITI.
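  • A minimal sketch of this ITI assignment rule, assuming a simple linear modification function, follows; the disclosure permits linear or higher-order functions and additional quality parameters, so this is illustrative only.

```python
from typing import List, Optional


def assign_iti(recommended: List[str], implemented: List[str],
               modification_extent: Optional[float] = None) -> float:
    """Assign an ITI for one policy execution from the user's disposition of the actions."""
    if implemented == recommended:
        return 1.0                  # accepted and applied without modification
    if not implemented or not set(recommended) & set(implemented):
        return 0.0                  # everything ignored/replaced, or nothing implemented
    # Otherwise, a pre-determined function of the amount of modification; a linear
    # function is assumed here, with the extent derived from the overlap if not given.
    if modification_extent is None:
        modification_extent = 1.0 - len(set(recommended) & set(implemented)) / len(recommended)
    return max(0.0, 1.0 - modification_extent)


if __name__ == "__main__":
    rec = ["deploy servers", "increase buffers", "suspend batch jobs"]
    print(assign_iti(rec, rec))    # 1.0
    print(assign_iti(rec, []))     # 0.0
    print(round(assign_iti(rec, ["deploy servers", "suspend batch jobs"]), 2))  # 0.67
```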
  • Therefore, for a given policy, the combination policy-based system and trust component works to increase the ITI of each policy so that the overall OTI for the given operational trust mode evolves toward the highest level of trust, which is represented by an OTI value of about 1. Having approached the highest trust level value, the operational trust mode of the system is elevated to the next level either manually by the user or automatically for a given policy. For example, the operational trust mode can be elevated from minimum trust to partial trust. However, at this new higher level trust mode, the level of trust in any given policy is relatively low, because there is no historical record or experience in operating the policy at the higher and more relaxed trust mode. Therefore, the ITI associated with the next execution of the policy is adjusted to express this relatively minimal level of trust in the policy at the current trust mode. In one embodiment, the ITI is set at about 0.
  • Referring to FIG. 3, a graphical illustration 46 of the trust index 48, i.e. ITI, versus time 49 is shown for a given policy 50. The graphical illustration provides a graphical history of ITI over time for a particular policy, illustrating the long-term trust pattern of a policy. The ITI varies over time between about 0 and about 1, which are the defined boundaries for the functions that express the trust index. The plot 54 increases over time as the level of trust increases for the policy at a given trust mode. When the trust mode is changed or increased, there is an associated decrease in the trust index. The general trend, however, is for the trust index value to increase over time towards the value of 1.
  • In the partial trust mode, user modifications to the policy-recommended actions are made. In one embodiment, unlike the modifications made in the minimal trust mode, user modifications of the recommended actions when in the partial trust mode are limited. In one embodiment, the recommended actions themselves cannot be modified or deleted by the user, and only the parameters to those actions can be modified. At this trust mode, since the actions themselves are not modified, review and adjustment of the recommended actions' parameters can be handled by less expert users, because the balance of the rule has been delegated to the policy system. As in the minimum trust mode, the ITI for a given execution of the policy is computed based on the quality and quantity of changes. If recommended actions are accepted and applied unchanged, the ITI is 1. If modifications to the recommended actions are made, the ITI is assigned an amount specified by a pre-determined or expert-defined function of the amount of modification.
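  • The partial trust constraint above can be sketched as follows, with the action itself fixed and only its parameters open to adjustment; the data layout and the per-change penalty are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class RecommendedAction:
    name: str                                                # fixed in partial trust mode
    params: Dict[str, float] = field(default_factory=dict)   # user-adjustable parameters


def partial_trust_iti(recommended: RecommendedAction,
                      implemented: RecommendedAction,
                      penalty_per_change: float = 0.25) -> float:
    """Compute an ITI from parameter changes only; the action itself may not change."""
    assert recommended.name == implemented.name, "actions cannot be modified or deleted"
    changed = sum(1 for key, value in recommended.params.items()
                  if implemented.params.get(key) != value)
    return max(0.0, 1.0 - penalty_per_change * changed)


if __name__ == "__main__":
    rec = RecommendedAction("increase_buffers", {"percent": 50.0, "client_tier": 1.0})
    impl = RecommendedAction("increase_buffers", {"percent": 30.0, "client_tier": 1.0})
    print(partial_trust_iti(rec, impl))    # 0.75: one parameter was adjusted
```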
  • As the policy system evolves to a point where the OTI is sufficiently close to 1, the trust operating mode for a given policy can be adjusted upwards again to the next higher level of trust, i.e. the full trust mode. This adjustment can be made either automatically or manually. At the full trust mode, the user has relatively strong confidence in the policy and the policy system. When running in full trust mode, modifications to the recommended policy actions are not made. However, the system continues to monitor the overall OTI, and if the OTI falls below a pre-defined critical level, the policy system can revert to lower level trust modes for a given policy.
  • In the full trust or automatic mode, the policy system is given full authority to define and implement the actions for a particular policy without user intervention. User review of the executed actions, however, can still be provided. In one embodiment, a summary is generated for each policy execution, and the user examines this summary periodically. Based upon the examination, the user can decide whether or not to leave the system in full trust mode or to switch the system back to the partial trust mode or the minimal trust mode for a particular policy. Absent intervention from the user, an ITI of 1 is awarded for each policy execution. If the user decides to switch back to other modes of operation, ITI's of 0 are assigned, either for all policies, or if records suffice, for the policies which the user decided were unreliable, in sufficient numbers to drive the OTI for each policy to a level typical of the mode of operation to which the user switches the system. An OTI that is sufficiently close to 1 indicates that the user trusts the policy (and the policy system) to a high degree. In this phase of the operation, the user periodically examines the summary and allows the policy system to run autonomously.
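  • The bookkeeping described for the full trust mode can be sketched as follows; the per-mode target OTI values and the simple-average OTI are assumptions introduced to keep the example concrete.

```python
from typing import Dict, List

# Assumed "typical" OTI levels for the lower trust modes; the disclosure does not
# specify particular values.
MODE_TYPICAL_OTI: Dict[str, float] = {"minimum": 0.3, "partial": 0.6}


def oti(itis: List[float]) -> float:
    return sum(itis) / len(itis) if itis else 0.0


def record_full_trust_execution(itis: List[float]) -> None:
    itis.append(1.0)    # absent user intervention, each execution earns an ITI of 1


def demote(itis: List[float], target_mode: str) -> None:
    """Append ITIs of 0 until the OTI falls to a level typical of the target mode."""
    target = MODE_TYPICAL_OTI[target_mode]
    while oti(itis) > target:
        itis.append(0.0)


if __name__ == "__main__":
    history: List[float] = []
    for _ in range(8):
        record_full_trust_execution(history)
    demote(history, "partial")
    print(round(oti(history), 2), len(history))   # 0.57 14
```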
  • In addition to the trust building tools described above, exemplary systems in accordance with the present invention can utilize more advanced learning techniques to modify system behavior, for example based upon the actions of the user in response to suggested actions, in order to obtain the trust of the user, e.g. to increase the OTI's. A variety of reinforcement learning algorithms can be used. Suitable reinforcement learning techniques are described in L. P. Kaelbling, M. Littman, A. Moore, "Reinforcement Learning: A Survey", Journal of Artificial Intelligence Research, Volume 4, 1996, which is incorporated herein by reference in its entirety.
  • Referring to FIG. 4, an exemplary embodiment of a reinforcement learning process as a feedback loop from information extracted from user interaction to the policy evaluation system 56 is illustrated. As illustrated, the policy evaluation system 58 generates policy decisions 60, for example in the form of recommended actions. In general, the recommended actions are selected so as to increase the level of trust between the user and the policy system. The recommended policy decisions may or may not be subject to user modifications 62, and a reinforcement learning system 64 monitors these modifications and provides an evaluation of them back to the policy system in the form of a feedback loop 65. This feedback loop provides the evaluation of user modifications to the policy system for use in making policy decision recommendations. Therefore, the reinforcement learning evaluation is used to further increase the level of trust between the user and the policy system.
  • In one embodiment, a policy rule produces a set of recommended actions. In addition, new actions can be added by the system if the user overrides the recommended actions. Each recommended action has an associated action acceptance value (AAV) that is a number between 0 and 1. The AAV expresses the likelihood that a given recommended action will be accepted by the user. The AAV for each recommended action is adjusted through the reinforcement process so as to earn the highest possible reward from the user. For example, the policy system attempts to maximize the ITI by suggesting the actions with the highest AAV. A recommended action's AAV increases as it is selected by the user and decreases as it is deselected by the user.
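The description does not fix how far an AAV moves on each acceptance or rejection; the sketch below assumes a fixed step size of 0.1 (an assumption, although it is consistent with the AAV changes in the example that follows) and bounds the value to the interval [0, 1], with candidate actions ranked by AAV when a recommendation list is assembled.

```python
# The step size is an assumption; the text only states that an AAV rises when
# the user selects (accepts) an action and falls when the user deselects it.

def update_aav(aav, accepted, step=0.1):
    """Move an action acceptance value toward 1 on acceptance, toward 0 on
    rejection, keeping it within the defined [0, 1] bounds."""
    aav = aav + step if accepted else aav - step
    return min(1.0, max(0.0, aav))

def rank_by_aav(actions):
    """Suggest the actions most likely to be accepted first, which tends to
    maximize the ITI earned from the user."""
    return sorted(actions, key=lambda a: a["aav"], reverse=True)


print(update_aav(0.9, accepted=True))     # 1.0
print(update_aav(0.5, accepted=False))    # 0.4
print([a["name"] for a in rank_by_aav(
    [{"name": "suspend_batch", "aav": 0.5}, {"name": "increase_buffer", "aav": 0.4}])])
# ['suspend_batch', 'increase_buffer']
```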
  • In a data center serving multiple clients, for example, a load adjustment policy, which adjusts the loading of the information technology (IT) assets including servers, storage devices and switches based on client-specified requirements and currently available assets, is running in minimal trust mode. The OTI is about 0.49 as calculated from 6 iterations of policy execution, and the threshold for advancing to the next trust mode is an OTI of ≧ about 0.5. In response to a sudden increase in traffic across the network, the policy system recommends three actions, each action having an associated AAV. The first action is to deploy two additional servers. The second action is to increase buffer storage by 50% for a certain group of clients, for example "GOLD" clients. The third action is to suspend processing of all batch jobs. Actions 1, 2, and 3 carry modification weights of 0.5, 0.3, and 0.2 respectively and AAV's of 0.9, 0.5, and 0.4 respectively. After examining the suggested actions, an administrator accepts actions 1 and 3 for execution. The ITI for this instance of policy execution is 0.7, where the ITI is the sum of the modification weights of the accepted actions. This ITI is added to the computation of the OTI for the load adjustment policy, resulting in an OTI of 0.52, enabling advancement of the policy system to the partial trust mode. The AAV of action 2, which was not accepted, decreases to 0.4, and the AAV's of actions 1 and 3, which were accepted, increase to 1.0 and 0.5 respectively. This change in AAV's gives action 3 a higher priority than action 2 as a candidate for inclusion in the recommended action list suggested by subsequent policy executions under similar conditions. In this way, the policy system uses reinforcement learning to adjust its recommended actions to achieve a higher ITI, and systems and methods in accordance with exemplary embodiments of the present invention establish trust between the policy system and its user during active use of the policy system.
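The arithmetic of this example is consistent with treating the OTI as a running average of the ITI's observed so far; the short sketch below reproduces the numbers above under that assumption. The action names and data layout are illustrative only.

```python
# Reproduces the worked data-center example, assuming the OTI is the running
# average of the ITI's observed so far; names and structure are illustrative.

weights  = {"deploy_two_servers": 0.5, "increase_gold_buffer": 0.3, "suspend_batch_jobs": 0.2}
accepted = {"deploy_two_servers", "suspend_batch_jobs"}    # actions 1 and 3

iti = sum(w for action, w in weights.items() if action in accepted)
print(iti)                                                 # 0.7

prior_oti, prior_executions = 0.49, 6
new_oti = (prior_oti * prior_executions + iti) / (prior_executions + 1)
print(round(new_oti, 2))                                   # 0.52, enough to enter partial trust mode
```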
  • The present invention is also directed to a computer readable medium containing a computer executable code that when read by a computer causes the computer to perform a method for establishing and increasing trust between a user and a policy system in accordance with the present invention and to the computer executable code itself. The computer executable code can be stored on any suitable storage medium or database, including databases in communication with and accessible by any component used in accordance with the present invention, and can be executed on any suitable hardware platform as are known and available in the art.
  • While it is apparent that the illustrative embodiments of the invention disclosed herein fulfill the objectives of the present invention, it is appreciated that numerous modifications and other embodiments may be devised by those skilled in the art. Additionally, feature(s) and/or element(s) from any embodiment may be used singly or in combination with other embodiment(s). Therefore, it will be understood that the appended claims are intended to cover all such modifications and embodiments, which would come within the spirit and scope of the present invention.

Claims (30)

1. A method for incorporating user trust into a policy-based system, the method comprising:
identifying at least one policy governing operational aspects of an application;
determining a level of trust associated with using the identified policy to govern the operational aspects of the application; and
using the determined level of trust to select an operational trust state that defines a level of autonomy with which the policy operates to govern operational states of the application.
2. The method of claim 1, wherein the step of identifying at least one policy further comprises identifying a plurality of policies, each policy capable of governing operational aspects of the application, and the step of determining a level of trust further comprises determining a level of trust for all identified policies.
3. The method of claim 2, further comprising combining each level of trust from each identified policy into an overall trust level.
4. The method of claim 1, wherein the step of determining a level of trust further comprises determining a new level of trust upon each use of the policy to govern operational aspects of the application.
5. The method of claim 4, further comprising aggregating each new level of trust into an overall trust level.
6. The method of claim 5, wherein the step of using the determined level of trust further comprises using the overall trust level to dictate the operational trust state that defines the level of autonomy with which the policy operates to govern operational states of the application.
7. The method of claim 1, further comprising identifying at least one policy recommended action to affect the operational aspects of the application;
wherein the step of determining the level of trust further comprises identifying any modifications made to the policy recommended action prior to implementation of the policy recommended action to affect operational aspects of the application.
8. The method of claim 7, wherein the step of determining the level of trust further comprises calculating an instantaneous trust index using the identified modifications.
9. The method of claim 8, wherein the step of calculating the instantaneous trust index further comprises normalizing the instantaneous trust index to a value between zero and one.
10. The method of claim 7, wherein the step of identifying the modifications comprises identifying a quantity of modifications made, a quality of each modification made and combinations thereof.
11. The method of claim 7, further comprising identifying a plurality of policy recommended actions to affect the operational aspects of the application, wherein the step of determining the level of trust further comprises identifying a quantity of modifications made to the recommended actions prior to implementation, a quality of each modification made to the recommended actions prior to implementation, a quantity of recommended actions implemented without modification and a quantity of recommended actions discarded without being implemented, and calculating an instantaneous trust index using the identified modifications.
12. The method of claim 7, wherein the step of determining the level of trust further comprises identifying any modifications made to the policy recommended action prior to each one of a plurality of implementations of the policy recommended action to affect operational aspects of the application, and calculating a separate instantaneous trust index for each one of the plurality of implementations using the identified modifications associated with that implementation.
13. The method of claim 12, further comprising calculating an overall trust index using all of the separate instantaneous indices.
14. The method of claim 13, further comprising normalizing the overall trust index to have a value between zero and one.
15. The method of claim 1, further comprising using reinforcement learning to maximize the level of trust.
16. The method of claim 1, wherein the step of using the determined level of trust to select an operational trust state further comprises selecting the operational trust state from a plurality of operational trust states.
17. The method of claim 16, wherein the step of selecting the operational trust state further comprises selecting the operational trust state from a fully supervised trust state, a partially modifiable trust state or an unsupervised full trust state.
18. The method of claim 1, further comprising increasing the selected operational trust state in response to an increase in the level of trust.
19. The method of claim 1, further comprising decreasing the operational trust state in response to a decrease in the level of trust.
20. The method of claim 1, further comprising associating the level of trust with the identified policy, and using the associated level as a parameter to determine when to select the identified policy to govern the operational aspects of the application.
21. The method of claim 1, wherein the step of determining the level of trust comprises using user-defined criteria to determine the level of trust.
22. The method of claim 1, further comprising associating the determined level of trust with an operating mode of the policy-based system and modifying the level of trust by changing the operating mode.
23. The method of claim 7, further comprising assigning a trust weighted value to each policy recommended action to maximize a likelihood of the policy being accepted by a user and to increase an overall trust level of the policy-based system.
24. The method of claim 1, further comprising displaying the determined level of trust for the policy and using the displayed level of trust in selecting the policy for use in governing the operational aspects of the application.
25. A computer readable medium containing a computer executable code that when read by a computer causes the computer to perform a method for incorporating user trust into a policy-based system, the method comprising:
identifying at least one policy governing operational aspects of an application;
determining a level of trust associated with using the identified policy to govern the operational aspects of the application; and
using the determined level of trust to select an operational trust state that defines a level of autonomy with which the policy operates to govern operational states of the application.
26. The computer readable medium of claim 25, wherein the step of determining a level of trust further comprises determining a new level of trust upon each use of the policy to govern operational aspects of the application.
27. The computer readable medium of claim 26, further comprising aggregating each new level of trust into an overall trust level.
28. The computer readable medium of claim 25, further comprising identifying at least one policy recommended action to affect the operational aspects of the application;
wherein the step of determining the level of trust further comprises identifying any modifications made to the policy recommended action prior to implementation of the policy recommended action to affect operational aspects of the application.
29. The computer readable medium of claim 28, wherein the step of determining the level of trust further comprises calculating an instantaneous trust index using the identified modifications.
30. The computer readable medium of claim 29, wherein the step of calculating the instantaneous trust index further comprises normalizing the instantaneous trust index to a value between zero and one.
US11/145,775 2005-06-01 2005-06-06 System to establish trust between policy systems and users Abandoned US20060277591A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/145,775 US20060277591A1 (en) 2005-06-01 2005-06-06 System to establish trust between policy systems and users
US12/545,167 US7958552B2 (en) 2005-06-01 2009-08-21 System to establish trust between policy systems and users

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68647105P 2005-06-01 2005-06-01
US11/145,775 US20060277591A1 (en) 2005-06-01 2005-06-06 System to establish trust between policy systems and users

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/545,167 Continuation US7958552B2 (en) 2005-06-01 2009-08-21 System to establish trust between policy systems and users

Publications (1)

Publication Number Publication Date
US20060277591A1 true US20060277591A1 (en) 2006-12-07

Family

ID=37495620

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/145,775 Abandoned US20060277591A1 (en) 2005-06-01 2005-06-06 System to establish trust between policy systems and users
US12/545,167 Expired - Fee Related US7958552B2 (en) 2005-06-01 2009-08-21 System to establish trust between policy systems and users

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/545,167 Expired - Fee Related US7958552B2 (en) 2005-06-01 2009-08-21 System to establish trust between policy systems and users

Country Status (1)

Country Link
US (2) US20060277591A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100180349A1 (en) * 2009-01-12 2010-07-15 Mahshad Koohgoli System and method of policy driven content development
US8312157B2 (en) * 2009-07-16 2012-11-13 Palo Alto Research Center Incorporated Implicit authentication
US8918834B1 (en) * 2010-12-17 2014-12-23 Amazon Technologies, Inc. Creating custom policies in a remote-computing environment
US11568236B2 (en) 2018-01-25 2023-01-31 The Research Foundation For The State University Of New York Framework and methods of diverse exploration for fast and safe policy improvement
US11004006B2 (en) * 2018-08-30 2021-05-11 Conduent Business Services, Llc Method and system for dynamic trust model for personalized recommendation system in shared and non-shared economy


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4251466B2 (en) * 1998-12-04 2009-04-08 富士通株式会社 Automation level adjusting device, automation level adjusting method, and automation level adjusting program recording medium
US7370098B2 (en) * 2003-08-06 2008-05-06 International Business Machines Corporation Autonomic management of autonomic systems

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6052723A (en) * 1996-07-25 2000-04-18 Stockmaster.Com, Inc. Method for aggregate control on an electronic network
US6088801A (en) * 1997-01-10 2000-07-11 Grecsek; Matthew T. Managing the risk of executing a software process using a capabilities assessment and a policy
US6785728B1 (en) * 1997-03-10 2004-08-31 David S. Schneider Distributed administration of access to information
US7086085B1 (en) * 2000-04-11 2006-08-01 Bruce E Brown Variable trust levels for authentication
US6854016B1 (en) * 2000-06-19 2005-02-08 International Business Machines Corporation System and method for a web based trust model governing delivery of services and programs
US20020026576A1 (en) * 2000-08-18 2002-02-28 Hewlett-Packard Company Apparatus and method for establishing trust
US20050210448A1 (en) * 2004-03-17 2005-09-22 Kipman Alex A Architecture that restricts permissions granted to a build process
US7233935B1 (en) * 2004-04-16 2007-06-19 Veritas Operating Corporation Policy-based automation using multiple inference techniques
US20060168022A1 (en) * 2004-12-09 2006-07-27 Microsoft Corporation Method and system for processing a communication based on trust that the communication is not unwanted as assigned by a sending domain

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8910241B2 (en) 2002-04-25 2014-12-09 Citrix Systems, Inc. Computer security system
US9781114B2 (en) 2002-04-25 2017-10-03 Citrix Systems, Inc. Computer security system
US20050044209A1 (en) * 2003-08-06 2005-02-24 International Business Machines Corporation Autonomic management of autonomic systems
US7370098B2 (en) * 2003-08-06 2008-05-06 International Business Machines Corporation Autonomic management of autonomic systems
US8201259B2 (en) * 2005-12-23 2012-06-12 International Business Machines Corporation Method for evaluating and accessing a network address
US20090094677A1 (en) * 2005-12-23 2009-04-09 International Business Machines Corporation Method for evaluating and accessing a network address
US20070220125A1 (en) * 2006-03-15 2007-09-20 Hong Li Techniques to control electronic mail delivery
US8341226B2 (en) * 2006-03-15 2012-12-25 Intel Corporation Techniques to control electronic mail delivery
US20080183603A1 (en) * 2007-01-30 2008-07-31 Agiliarice, Inc. Policy enforcement over heterogeneous assets
US20080244690A1 (en) * 2007-04-02 2008-10-02 Microsoft Corporation Deriving remediations from security compliance rules
US8533841B2 (en) * 2007-04-02 2013-09-10 Microsoft Corporation Deriving remediations from security compliance rules
US7818271B2 (en) 2007-06-13 2010-10-19 Motorola Mobility, Inc. Parameterized statistical interaction policies
WO2008157052A1 (en) * 2007-06-13 2008-12-24 Motorola, Inc. Parameterized statistical interaction policies
US20080313116A1 (en) * 2007-06-13 2008-12-18 Motorola, Inc. Parameterized statistical interaction policies
US20090077133A1 (en) * 2007-09-17 2009-03-19 Windsor Hsu System and method for efficient rule updates in policy based data management
US20090113062A1 (en) * 2007-10-31 2009-04-30 Cisco Technology, Inc. Efficient network monitoring and control
US8195815B2 (en) * 2007-10-31 2012-06-05 Cisco Technology, Inc. Efficient network monitoring and control
US8516539B2 (en) * 2007-11-09 2013-08-20 Citrix Systems, Inc System and method for inferring access policies from access event records
US8990910B2 (en) 2007-11-13 2015-03-24 Citrix Systems, Inc. System and method using globally unique identities
US20090193499A1 (en) * 2008-01-25 2009-07-30 Oracle International Corporation Method for application-to-application authentication via delegation
US8510796B2 (en) * 2008-01-25 2013-08-13 Oracle International Corporation Method for application-to-application authentication via delegation
US20090199264A1 (en) * 2008-01-31 2009-08-06 Intuit Inc. Dynamic trust model for authenticating a user
US8635662B2 (en) * 2008-01-31 2014-01-21 Intuit Inc. Dynamic trust model for authenticating a user
US9240945B2 (en) 2008-03-19 2016-01-19 Citrix Systems, Inc. Access, priority and bandwidth management based on application identity
US8943575B2 (en) 2008-04-30 2015-01-27 Citrix Systems, Inc. Method and system for policy simulation
US20090276204A1 (en) * 2008-04-30 2009-11-05 Applied Identity Method and system for policy simulation
US8990573B2 (en) 2008-11-10 2015-03-24 Citrix Systems, Inc. System and method for using variable security tag location in network communications
US9794268B2 (en) * 2009-10-16 2017-10-17 Nokia Solutions And Networks Oy Privacy policy management method for a user device
US20120204222A1 (en) * 2009-10-16 2012-08-09 Nokia Siemens Networks Oy Privacy policy management method for a user device
US9015485B1 (en) * 2011-12-08 2015-04-21 Amazon Technologies, Inc. Risk-based authentication duration
US8683597B1 (en) * 2011-12-08 2014-03-25 Amazon Technologies, Inc. Risk-based authentication duration
US10181147B2 (en) 2012-05-17 2019-01-15 Walmart Apollo, Llc Methods and systems for arranging a webpage and purchasing products via a subscription mechanism
US20130317941A1 (en) * 2012-05-17 2013-11-28 Nathan Stoll Trust Graph
US9799046B2 (en) 2012-05-17 2017-10-24 Wal-Mart Stores, Inc. Zero click commerce systems
US9875483B2 (en) 2012-05-17 2018-01-23 Wal-Mart Stores, Inc. Conversational interfaces
US10210559B2 (en) 2012-05-17 2019-02-19 Walmart Apollo, Llc Systems and methods for recommendation scraping
US10346895B2 (en) 2012-05-17 2019-07-09 Walmart Apollo, Llc Initiation of purchase transaction in response to a reply to a recommendation
US10580056B2 (en) 2012-05-17 2020-03-03 Walmart Apollo, Llc System and method for providing a gift exchange
US10740779B2 (en) 2012-05-17 2020-08-11 Walmart Apollo, Llc Pre-establishing purchasing intent for computer based commerce systems
US9571505B2 (en) * 2013-12-04 2017-02-14 International Business Machines Corporation Trustworthiness of processed data
US10063563B2 (en) 2013-12-04 2018-08-28 International Business Machines Corporation Trustworthiness of processed data
US20150156185A1 (en) * 2013-12-04 2015-06-04 International Business Machines Corporation Trustworthiness of processed data
US10769291B2 (en) 2017-06-12 2020-09-08 Microsoft Technology Licensing, Llc Automatic data access from derived trust level

Also Published As

Publication number Publication date
US7958552B2 (en) 2011-06-07
US20090307747A1 (en) 2009-12-10

Similar Documents

Publication Publication Date Title
US7958552B2 (en) System to establish trust between policy systems and users
US11656915B2 (en) Virtual systems management
US10942781B2 (en) Automated capacity provisioning method using historical performance data
US8140682B2 (en) System, method, and apparatus for server-storage-network optimization for application service level agreements
US7552152B2 (en) Risk-modulated proactive data migration for maximizing utility in storage systems
US20080155386A1 (en) Network discovery system
Haratian et al. An adaptive and fuzzy resource management approach in cloud computing
US11042410B2 (en) Resource management of resource-controlled system
US11777949B2 (en) Dynamic user access control management
Hussain et al. Profile-based viable service level agreement (SLA) violation prediction model in the cloud
US11949737B1 (en) Allocation of server resources in remote-access computing environments
Mogouie et al. A novel approach for optimization auto-scaling in cloud computing environment
Kumar et al. Performance based Risk driven Trust (PRTrust): On modeling of secured service sharing in peer-to-peer federated cloud
US7370098B2 (en) Autonomic management of autonomic systems
Hussain et al. A user-based early warning service management framework in cloud computing
Chan et al. How can we trust an autonomic system to make the best decision?
EP2381393A1 (en) A method of reinforcement learning, corresponding computer program product, and data storage device therefor
Chan et al. How can we trust a policy system to make the best decision?
Choudhary et al. Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation
Mury et al. Task distribution models in grids: towards a profile‐based approach
KR102607460B1 (en) A auto scaling method and auto scaling system based on horizontal scale control
Nassif et al. Resource selection in grid: a taxonomy and a new system based on decision theory, case‐based reasoning, and fine‐grain policies
Jimenez et al. Resource Allocation of Multi-User Workloads in Cloud and Edge Data-Centers Using Reinforcement Learning
Soveizi Information Systems Group, University of Groningen, Groningen, The Netherlands {n. soveizi, d. karastoyanova}@ rug. nl
Frenzel et al. A fuzzy, utility-based approach for proactive policy-based management

Legal Events

Date Code Title Description
AS Assignment

Owner name: IBM CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARNOLD, WILLIAM C.;CHAN, HOI YEUNG;SEGAL, ALLA;AND OTHERS;REEL/FRAME:016560/0334;SIGNING DATES FROM 20050719 TO 20050720

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE