WO2009139650A1 - A data obfuscation system, method, and computer implementation of data obfuscation for secret databases - Google Patents

A data obfuscation system, method, and computer implementation of data obfuscation for secret databases

Info

Publication number
WO2009139650A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
obfuscating
obfuscation
values
granularity
Prior art date
Application number
PCT/NZ2009/000077
Other languages
French (fr)
Inventor
Andrew John Cardno
Ashok Kumar Singh
Original Assignee
Business Intelligence Solutions Safe B.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Business Intelligence Solutions Safe B.V. filed Critical Business Intelligence Solutions Safe B.V.
Priority to US12/992,513 priority Critical patent/US9305180B2/en
Publication of WO2009139650A1 publication Critical patent/WO2009139650A1/en
Priority to US15/183,449 priority patent/US20160365974A1/en

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08: Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816: Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H04L9/0819: Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s)
    • H04L9/0822: Key transport or distribution, i.e. key establishment techniques where one party creates or otherwise obtains a secret value, and securely transfers it to the other(s) using key encryption key
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6227: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254: Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/06: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
    • H04L9/065: Encryption by serially and continuously modifying data stream elements, e.g. stream cipher systems, RC4, SEAL or A5/3
    • H04L9/0656: Pseudorandom key sequence combined element-for-element with data sequence, e.g. one-time-pad [OTP] or Vernam's cipher
    • H04L9/0662: Pseudorandom key sequence combined element-for-element with data sequence, e.g. one-time-pad [OTP] or Vernam's cipher with particular pseudorandom sequence generator

Definitions

  • a data obfuscation system, method, and computer implementation via software or hardware are provided that allow a legitimate user to gain access via a query to data of sufficient granularity to be useful while maintaining the confidentiality of sensitive information about individual records.
  • the data obfuscating system and method is particularly applicable to databases.
  • Data are numbers, characters, images or other outputs from devices that convert physical quantities into symbols.
  • Data can be stored on media, as in databases; numbers can be converted into graphs and charts, which again can be stored on media or printed.
  • a database that contains sensitive or confidential information stores data and therefore it is secured - by encryption using a public key, by encryption using a changing public key, in which case the data is held secure while the public key is changed, or by restricting access to it by the operating system.
  • a database (DB) application must protect the confidentiality of sensitive data and also must provide reasonably accurate aggregates that can be used for decision making.
  • One approach to achieve this goal is to use a statistical database system (SDB).
  • SDB allows users to access aggregates for subsets of records; the database administrator (DBA) sets a minimum threshold rule on the size of the subset for which aggregates can be accessed. As an example, in an SDB, if a query returns less than or equal to 89 records, then no information is provided to the user for such a query.
  • a database that obfuscates data is conventionally known as a secret database.
  • a secret database is ideally efficient (stores the data in an efficient manner with minimal overhead), provides a query language (e.g., SQL) interface, is repeatable, i.e., it returns identical results for identical queries, and protects the confidentiality of individual records.
  • SQL query language
  • a secret database may be implemented in a parallel fashion as in a parallel set of query pre and post filters. These may be implemented as distributed hardware components, giving the obfuscation the ability to handle very large databases and to run the queries against the database in a distributed way.
  • Cox (1980) considered the problem of statistical disclosure control for aggregates or tabulation cells and discussed cell suppression methodology under which all cells containing sensitive information are suppressed from publication.
  • Duncan and Lambert (1986) used Bayesian predictive posterior distributions for the assessment of disclosure of individual information, given aggregate data.
  • Duncan and Mukherjee (2000) considered combining query restriction and data obfuscating to thwart attacks by data snoopers.
  • Chowdhury et al. (1999) have developed two new matrix operators for confidentiality protection.
  • the fitted MLR equations may provide poor prediction, which in turn will lead to very poor resolution in the obscured values
  • Muralidhar et al. (1999) developed a method for obfuscating multivariate data by adding a random noise to value data; their method preserves the relationships among the variables, and the user is given access to perturbed data.
  • One potential problem with this approach is that a query may not be repeatable, i.e., identical queries may produce random outputs, in which case one can get very close to the true values by running identical queries a large number of times.
  • a method of obfuscation in which a standard data query may be submitted to a secret database with output data being obfuscated.
  • a method of obfuscating data so that output values of a data request are obfuscated in a repeatable manner, via the use of an Obfuscating Function (OF) whilst maintaining the amount of obfuscation within a range so that the transformed values provide to a user information of a prescribed level of granularity.
  • OF Obfuscating Function
  • obfuscating data comprising: running an unconstrained query on data in a secret database to produce output data; and obfuscating the output data using a repeatable obfuscation function to return obfuscated data in response to the query.
  • an obfuscation system comprising: an input interface; a query engine for receiving input data from the input interface; memory interfaced to the query engine for storing data and supplying data to the query engine in response to a data request; an output interface configured to receive output data from the query engine; and an obfuscation engine for obfuscating data retrieved from memory and obfuscating it in a repeatable manner prior to supplying it to the output interface.
  • an obfuscation system comprising a database, an obfuscation circuit and a user interface wherein the obfuscation circuit operates according to the method and an obfuscation circuit adapted to interface between a database and a user interface which obfuscates data values returned from a database in response to a user query operating in accordance with the method.
  • Figure 1 shows a system for obfuscating data
  • Figure 2 A shows a method for obfuscating data based on addition of random noise to the data values
  • Figure 2B shows a method for obfuscating data based on not adding any random noise to the data values
  • Figure 3 shows a method for obfuscating data using regression
  • Figure 4 shows a histogram of the synthetic data for the regression obfuscating of data in a first example
  • Figure 5 shows a method for obfuscating data according to a third embodiment
  • Figure 12 shows a histogram of values in a synthetic database based upon 49 equal intervals
  • Figure 13 shows a histogram of values in a synthetic database based upon 1000 equal intervals. It can be seen from Figure 13 that even when a large number of equal class intervals is used, the frequency table has very poor resolution (especially for small values) and is not suitable for data obscuring;
  • Figure 14 shows a histogram of values in a synthetic database based upon unequal intervals; the x-axis in this figure is not to scale;
  • Figure 15 shows a graph of perturbed mid-points (Y) vs. true mid-points (M) of class intervals to which the weighted least squares line is fit;
  • Figure 16 shows a histogram of values in the synthetic database with 62 equal class intervals
  • Figure 17 shows a histogram of values in the synthetic database with 1000 equal class intervals
  • Figure 18 shows a histogram of values in the synthetic database with 67 unequal class intervals.
  • Figure 19 shows a system for implementing an obfuscation method.
  • the system and method for obfuscating data described is particularly applicable to a database and a query language, and more particularly to a structured query language (SQL) database and it is in this context that the system and method for obfuscating data will be described. It will be appreciated, however, that the system and method for obfuscating data has greater utility since it can be used to obscure any type of data however that data may be stored or generated.
  • SQL structured query language
  • Figure 1 illustrates a system for obfuscating data wherein the obscured data may be stored in a secret database, such as a well known structured query language (SQL) database.
  • the system may include an obfuscating unit/function that obscures the underlying data 25 in the database.
  • the obfuscation engine 24 may be implemented as one or more lines of computer code (that may or may not be part of a database management system software) that implement the different processes for obfuscating the data as described below in each of the embodiments.
  • the obfuscating unit/function also may be implemented as a hardware device/circuit or as a combination of hardware and software and the system and method is not limited to any particular implementation of the obfuscating unit/function.
  • Figure 19 shows a computer system in which a query is received by input interface 10 and a communications handler 11 passes the query to an input filter 12 to ensure that the query is of a permitted type.
  • the input filter 12 may ensure that the granularity of the query matches the granularity of the obfuscation; in general it is necessary to ensure that the boundary conditions on the filters are correct for obfuscation, for example on random rounding the boundaries of filters may be random rounded before the query is processed or on statistical adjustment the input boundaries may be statistically adjusted before the query is executed.
  • This pre-process may be used to limit the final aggregation of the data. In addition, this filter may be used to stop repeated queries that may be persistent access attempts to flood the obfuscation method.
  • the query is then executed by query engine 13.
  • Query engine 13 may apply filters on the queries, for example filtering out queries that restrict to very fine ranges or queries that have a very limited subset of data. This filtering may be applied by the individual selection criteria clause or to the whole of the query. Filtering includes methods such as applying joins or other functional criteria, which then retrieve data for the query from database 14.
  • the output data from query engine 13 is then obfuscated by an obfuscation engine 15.
  • the output of obfuscation engine 15 passes through output filter 16.
  • the output filter 16 may apply a restriction to the output values using a linear function or a measured threshold value.
  • the output filter may count the frequency of the base data being filtered - for example, a simple linear filter that suppresses results constructed from fewer than X records, or a filter based on the variation in the data such that the results must be within certain statistical measures of each other, for example that the base data cannot vary by more than one standard deviation from the mean.
  • Output filter 16 may also perform additional post query filtering such as distributing results based on filtering (e.g. distributing email, storing output data to different storage media etc.), additional filtering (i.e. only delivering results satisfying a data field criteria) etc.
  • Output filter 16 may include a programmable streaming data processor. The output from output filter 16 may be output by output interface 17 for use by a user or other device.
  • components of the system may be implemented in hardware or software. Each of these processes may be run either singularly or in parallel. When the processes are run in parallel the results of the parallel process may be the same or similar independent of which process executed the request. However, it may be advantageous for the obfuscation engine to be implemented as a specific circuit to ensure obfuscation of all output data. Although the obfuscation engine is shown as a single engine it will be appreciated that data obfuscation may be performed by multiple nodes of a clustered computer system. The data may be abstract data and/or data obtained from real world sensors etc. The output data may drive a display, printer, or some real world device (e.g. where a sensitive location cannot be revealed but a vehicle, person or other kind of moveable vessel needs to be inhibited from travelling within a certain range of that location).
  • in another kind of obfuscation, relating to the tracking of moveable objects, the results of this tracking may be presented in aggregate such that obfuscation can be applied to enable output of information such as the number of vehicles, people or vessels in an area without the recipients of said information being able to determine the specific time and/or place at which an event or movement occurred.
  • a method of obfuscating data including random rounding, especially a mapping-based method based on the last digit.
  • As an example, if the last digit is x, then it gets mapped to 9 - x, where x = 0, 1, 2, ..., 8, 9.
  • This method loses information but retains the consistency of the data.
  • Another example is to map all odd frequencies to round down and even frequencies to round up.
  • a method of obfuscating data that is based upon a (pseudo) random number seeding function which seeds a random number generator with input value(s), with output either shifted up or down a number of granularity factors.
  • the seeding function could be taken from some external measure, such as the last digit of the temperature or the time at any point; if repeatability is required, the seed can be held secret and entered by a human.
  • the data may be obscured based upon random rounding and least squares regression involving one or more predictor variables.
  • the data may be obscured based upon random rounding of frequency data and simple linear regression of value data.
  • the data may be obscured using principal components regression (PCR) wherein the data may be multivariate data.
  • PCR principal components regression
  • each point may be given equal weight, a weighting inversely proportional to its value or a weighting inversely proportional to the mid-point of the class interval and directly proportional to the frequency of the class interval.
  • the Obfuscation Function may minimize Total Weighted Error.
  • the Total Weighted Error may be the Sum of Squared Errors, the Sum of Absolute Errors, the Sum of Squared Relative Errors or the Sum of Absolute Relative Errors.
  • the method may be matrix algebra based, based upon computer search or based upon neural networks. Random errors may be added to true values to form a dependent variable y for weighted regression, using the true data value as the independent variable.
  • Weighted regression may also be based upon using the true data value as the dependent variable, some function of the true data value as the independent variable, and fitting a regression equation. Alternatively, weighted regression may also be based upon using the true data value as the dependent variable, some function of the true data value and values in other columns as the set of independent variables, and fitting a regression equation.
  • Weighted regression may be applied to a subset of records to obtain an obfuscating function. Weighted regression may include selecting a subset of records which is a k-point summary of values, on which weighted regression is performed to get an obfuscating function.
  • Obfuscation may be implemented in one of many ways as follows:
  • Each value column in the database can be obfuscated, and then the obfuscated values can be stored in another database; a user may be given access to this obfuscated database.
  • value data can be obfuscated in response to a data request or a query from a user.
  • When the information requested by a user is in the form of a graph or chart, the visual display is provided using obfuscated data.
  • the obfuscated data may be used in data visualization systems including geographical information systems and classical graphing systems.
  • the calculations for the obfuscation method can be made within a desktop environment, or on a computer cluster.
  • a distributed database which is a database that resides on storage devices that are not all attached to a common CPU but to multiple computers, which may or may not be located in the same physical location
  • calculations for data obfuscation can be performed in various nodes of a clustered computer.
  • the calculation of frequency in a computer program can be performed using an integer type, in which the integer type may be stored in 2, 4, 8, 16, 32, 64, 128, 256, 512 or 1024 bytes in memory.
  • Data can be obfuscated as anticipated, and then pre-calculated; the obfuscated values can be stored for user access.
  • a query, for example an SQL query, may often restrict the data, typically by using the WHERE clause of the SQL statement. These statements look like "where X < 10" OR "where X > 1000". In this case if the database had only one person with X > 1000 then the result set would contain several small values which could be filtered.
  • a request for an aggregate can be handled in two ways: (a) the methods of obfuscation can be applied on value data for which the aggregate is requested, and then the aggregate is computed from the obfuscated data, and (b) the aggregate is first computed based on the true values in the database, and then the aggregate is obfuscated before returning an output to the user.
  • Restrictions can be applied to the data relating to the granularity of the frequency data. If the granularity of the result set is 90 records, then it might be necessary to only allow filters in multiples of 90, in other words 90, 180, 270, 360, 450...
  • the obfuscation amount could be altered in response to a number of factors including security level or frequency of query or granularity of the output data. For example if a query sums all data in the database it will not need obfuscation. This would also be an efficiency application. This would need to be applied to each sub-clause in the "Where" statement.
  • the amount of obfuscation can be made to depend upon (a) different user access rights, (b) different levels of frequency of output aggregation. As an example, the output in response to a user with low access rights may have a larger amount of obfuscation than that for a user with high access rights. As another example, the output in response to a query with a frequency of 110 may have a larger amount of obfuscation than a query with a frequency of 505.
  • the obfuscated values may be used in the calculation of an index to the obfuscated data.
  • Indexes on a database, such as hash or B-Tree indexes, are pre-calculated lookups that make queries faster.
  • the implementation would need to build indexes on the obfuscated values.
  • the original data can be stored in a secured format, for example, in an encrypted database, where the access to the data is restricted by the operating system.
  • the encryption can be done using a public key encryption method, which may be changed in a secure manner.
  • the encryption can be applied to the output data as well.
  • the system and method of this invention produces obfuscated data.
  • the output data can be used for further calculations in many ways and for many purposes: to compute statistical summary of database, to perform some heuristic calculations on values in the database, to prepare graphs in a graphic software package or a geographic information system.
  • the obfuscated data can of course be printed, or stored on any media for further computations, or even encrypted.
  • the integrity check value is implemented with a technology selected from the group consisting of: CRC (cyclic redundancy check), hash, MD5, SHA-1, SHA-2, HMAC (keyed-hash message authentication code), partial-hash-value and parity checks. If the solution is implemented as a middleware layer the communications between the database cluster and the middleware are integrity checked.
  • the obfuscating unit/function may use a least squares regression for obfuscating the base data and random rounding is then applied to output from regression to further reduce resolution.
  • this embodiment uses a method of least squares regression in a computationally efficient way that allows the user to control the amount of obfuscating while maintaining repeatability of a query.
  • the obfuscating unit in this embodiment provides a database application (DBA) such as a database management application with the flexibility of obfuscating data with or without adding a random noise to the values in the database, is easier to implement as it is based on regression which may be applied to all N records in the database in case N is moderate and to a subset of records in the database in case N is extremely large, and will yield identical outputs to identical queries, even if the value data is perturbed by adding a random noise.
  • DBA database application
  • the obfuscating of value data is done by performing weighted regression.
  • Weights for data points used in regression can be chosen in any one of two ways:
  • a linear model is fitted to a dependent variable Y as a function of predictor variables X_1, X_2, ..., X_p, where p can equal 1 (in which case obfuscating is done on one column) or p can be greater than 1 (in which case several columns need to be obscured at once).
  • the unknown parameters a and b are chosen so as to minimize a measure of departure from the model, which we will refer to as the Total Weighted Error.
  • Total Weighted Error can be defined in one of several ways:
  • the variable x_TRUE is used as a predictor or as a dependent variable, depending upon the obfuscating method used, where x_TRUE,i is the true value of the data in record i, and the variables (X_1,i, X_2,i, ..., X_p,i) can be chosen in one of several ways depending upon the following:
  • Y_i = x_TRUE,i + e_i, where e_i is a random error with mean 0 and variance σ², as shown in Figure 2A; or
  • This fitted line is used as the output of an SQL query.
  • the m predictors can be taken as the first m principal component scores (Johnson and Wichern, 2007) obtained from performing a principal components analysis (PCA) of the subset {(X_1,i, X_2,i, ..., X_m,i, Y_i)}.
  • PCA principal components analysis
  • This MLR equation is then used as the output of an SQL query.
  • the random errors e_i can be generated by
  • Example 1 Linear regression based obfuscating of value data
  • the data for this example was generated from the mixture normal distribution
  • f(x) = 0.5 f_1(x) + 0.25 f_2(x) + 0.1 f_3(x) + 0.1 f_4(x) + 0.05 f_5(x), where f_1(x), ..., f_5(x) are normal densities.
  • Table 1 Synthetic database of 10,000 records, its 20-point summary and data for regression
  • the method may apply random rounding on output from weighted regression to further reduce the resolution, as shown in Figure 5.
  • frequency suppression will be used and frequency values below a preset threshold will be suppressed.
  • an example of random rounding with frequency suppression at 90 is used. In this example, if the frequency of a query is less than 90, then the frequency will be suppressed and the output will be annulled. Also, the random rounding procedure used in this example has base of 10, and rounds up a frequency if the last digit of the frequency is even, and rounds down if it is odd.
  • Frequency suppression may not provide sufficient protection against tracker attacks (Duncan and Mukherjee, 2000), since the answer to a query with size less than the specified threshold (90 in the above example) may be computed from a finite sequence of legitimate queries, i.e., queries of sizes above 90 each.
  • a high frequency filter can be configured to hamper such attempts to determine the true values from the obfuscated values.
  • the method and system to obscure data uses a univariate statistical method for obfuscating data in one column of the database, without randomly perturbing the data.
  • a query produces 110 records.
  • the obfuscating approach of the second embodiment can be used on a smaller subset of the data (e.g., a 5-point summary of the data), as demonstrated in Table 2 below.
  • g(x) = x^0.25 in this example.
  • the method and system to obscure data uses a multivariate statistical method of PCR for obfuscating data in one or more columns without randomly perturbing the data.
  • the method of this invention can be carried out in steps (a) and (b).
  • the PC-scores are the values of the principal components calculated for each record.
  • the PC-scores are computed and saved in the computer code.
  • Appendix A shows an example of obfuscating multivariate data using the PCR-based approach of this invention.
  • the method also provides a PCR-based solution even for the case when only one column of the database needs obfuscating.
  • the PCR requires a minimum of 2 columns so the invention includes a way to work around this problem. This method is briefly described below:
  • PCR Principal Components Regression
  • the PCR-based obfuscating of this invention will output aggregates that are computed from random-rounded versions of values predicted by the above MLR equation (a sketch of this PCR-based approach appears at the end of this section).
  • Table 2 Eigenvalues of the sample correlation matrix and proportion of total variance in data explained by the PC's.
  • Table 10 shows the descriptive statistics of performance measures.
  • the variables Q1 and Q3 in Table 10 are the first and third quartiles of the error and relative error terms.
  • Table 11 shows the descriptive statistics of the 'true' data in the simulated database
  • Table 12 shows the descriptive statistics of the 'obscured' data.
  • the variables Q1 and Q3 in Table 11 are the first and third quartiles of the error and relative error terms.
  • Table 2 shows that for the generated database, the first 3 PC's account for 92.1% of the total variation in the data. We therefore used PC1, PC2, and PC3 as the potential predictors for MLR models for the 6 variables in the database.
  • Table 13 shows the number of PC's used in the selected MLR models and the corresponding R² values. It should be kept in mind that the amount of obfuscating in the variables is controlled by the number of PC's used in the regression models; the smaller the number of PC's used, the larger the error and the relative error terms will be.
  • Table 13 Number of PC's used in the model and the corresponding R² values
  • the regression based obfuscating method of embodiments 1 and 2 requires a representative subset of values in the column of the database to be obfuscated so that the amount of obfuscating can be controlled. In this embodiment, the regression will be performed on a frequency table representation or histogram of the data.
  • the values in one column of a secret database typically come not from one homogeneous statistical population but a mixture of several statistical populations, which range from very small to very large values.
  • if a histogram based on equal class width is created for such values, it is quite difficult to get a good resolution without using an extremely large number of class-intervals, which is not practical (see Figures 12, 13, and 15).
  • a frequency table based upon unequal class widths is first calculated, and the regression is performed on the perturbed versions of the mid-points of these class intervals. The details of this embodiment are given below:
  • M_j = (L_j + U_j)/2.
  • the method of weighted least squares is used to fit this straight line to the k pairs of points.
  • the weight w_j assigned to the j-th point (M_j, Y_j) is taken to be directly proportional to the frequency f_j, since a class-interval with high frequency should be assigned a higher weight than a class-interval with low frequency (a sketch of this frequency-table approach appears at the end of this section).
  • Example 4 Weighted least squares obfuscation based on frequency table of data generated from a mixture normal distribution
  • f_2(x) is normal with mean 50 and sd 5, f_3(x) is normal with mean 1000 and sd 10, f_4(x) is normal with mean 10000 and sd 100, and f_5(x) is normal with mean 100000 and sd 100.
  • the method of this invention therefore is based upon a frequency tabulation of the data based upon unequal class intervals (Figure 14).
  • Figure 15 shows a graph of perturbed mid-points (Y) vs. true mid-points (M) of class intervals, for Example 4.
  • Table 14 shows the intermediate calculations for computing the estimates of the intercept a and the slope b of the obfuscating straight line.
  • the obscured values corresponding to each value in the database were then computed from the fitted regression line.
  • Table 15 shows the descriptive statistics of the amount of obfuscation over the entire synthetic database of 10000 records for varying ⁇ values.
  • Table 16 shows the error (x_TRUE,i - x_obscured,i) and the percent relative error.
  • Example 5 Weighted least squares obfuscating based on frequency table of data generated from the Zipf distribution
  • ζ(a) is the Riemann zeta-function defined as ζ(a) = Σ (1/n^a), with the sum taken over n = 1, 2, 3, ...
  • the Zipf distribution can be used to model the probability distribution of
  • Gan et al. (2006) discuss modeling the probability distribution of city-size by the Zipf distribution.
  • Hörmann and Derflinger developed a rejection-inversion method for generating random numbers from monotone discrete probability distributions.
  • the random variable x has the Zipf distribution with parameter a.
  • Table 15 shows the intermediate calculations for computing the estimates of the intercept a and the slope b of the obfuscating straight line for data of Example 5.
  • the obscured values corresponding to each value in the database were then computed from the fitted regression line.
  • Table 16 Sum of results (true and obscured) and the percent relative error in the sum of a random query of size 50, for data of Example 4.
  • the present invention provides a method and system for obfuscating data that is repeatable, computationally efficient, provides a query language interface, can return identical results for identical records and preserves the confidentiality of the secret data.
  • An independent obfuscation engine isolates obfuscation from the query engine and facilitates operation in a distributed computing environment.
  • Dedicated obfuscation hardware reduces the risk of obfuscation being avoided.
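The following Python sketch illustrates the PCR-based obfuscating approach outlined above: the value columns are standardized, principal-component scores are computed from the correlation matrix, each column is regressed on the first m scores, and the fitted values are released in place of the true values. The synthetic data, the choice of m = 2, and all names are assumptions made for illustration; this is a sketch of the general technique, not the patented implementation.

    import numpy as np

    rng = np.random.default_rng(11)
    # Synthetic 3-column value data; the means and covariance are invented for the example.
    data = rng.multivariate_normal(mean=[100, 50, 10],
                                   cov=[[25, 10, 2], [10, 16, 1], [2, 1, 4]],
                                   size=5000)

    # Principal-component scores computed from the standardized columns.
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    m = 2                                      # number of PCs kept controls the amount of obfuscation
    scores = z @ eigvecs[:, order[:m]]         # first m PC scores for every record

    # Regress each column on the m PC scores; the fitted values form the obfuscated column.
    X = np.column_stack([np.ones(len(data)), scores])
    obfuscated = np.empty_like(data)
    for j in range(data.shape[1]):
        beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
        obfuscated[:, j] = X @ beta

    print(np.round(obfuscated[:3], 1))         # obfuscated versions of the first three records

Because the fitted values are a deterministic function of the stored data, identical queries over the obfuscated columns return identical results; keeping fewer PCs increases the error and relative error terms, as noted above.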
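Similarly, a minimal sketch of the frequency-table embodiment: values are tabulated into unequal class intervals, the class mid-points are perturbed, a straight line is fitted to the perturbed mid-points by weighted least squares with weights proportional to the class frequencies, and every value is then passed through the fitted line. The class boundaries, the 5% perturbation, and the mixture used to generate the data are assumptions for the sketch.

    import numpy as np

    rng = np.random.default_rng(7)              # fixed seed keeps the perturbation repeatable

    # Synthetic column containing a mixture of small, medium, and large values.
    x = np.concatenate([rng.normal(50, 5, 5000),
                        rng.normal(1000, 10, 3000),
                        rng.normal(10000, 100, 2000)])

    # Frequency table on unequal class intervals (hypothetical boundaries).
    edges = np.array([0.0, 40, 60, 900, 1100, 9500, 10500])
    freq, _ = np.histogram(x, bins=edges)
    mid = (edges[:-1] + edges[1:]) / 2          # class mid-points M_j

    # Perturbed mid-points Y_j and weights proportional to the class frequencies f_j.
    Y = mid * (1 + rng.normal(0, 0.05, size=mid.shape))
    w = freq / freq.sum()

    # Weighted least squares fit of Y = a + b*M.
    X = np.column_stack([np.ones_like(mid), mid])
    a, b = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ Y)

    obfuscated = a + b * x                      # every value passed through the fitted line
    print(round(a, 3), round(b, 3))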

Abstract

A data obfuscation system, method, and computer implementation via software or hardware that allows a legitimate user to gain access via a query to data of sufficient granularity to be useful while maintaining the confidentiality of sensitive information about individual records. Output values of a data request are obfuscated in a repeatable manner, via the use of an Obfuscating Function (OF), whilst maintaining the amount of obfuscation within a range so that the transformed values provide to a user information of a prescribed level of granularity. The data obfuscating system and method is particularly applicable to databases. The data obfuscation engine may be implemented in hardware and/or software within a stand alone or distributed environment.

Description

A DATA OBFUSCATION SYSTEM, METHOD, AND COMPUTER IMPLEMENTATION OF DATA OBFUSCATION FOR SECRET
DATABASES
FIELD OF THE INVENTION
A data obfuscation system, method, and computer implementation via software or hardware are provided that allow a legitimate user to gain access via a query to data of sufficient granularity to be useful while maintaining the confidentiality of sensitive information about individual records. The data obfuscating system and method is particularly applicable to databases.
BACKGROUND OF THE INVENTION
The problem of securing organizational databases so that legitimate users can access data needed for decision making, while limiting disclosure so that confidential or sensitive information about a single record cannot be inferred, has received considerable attention in the statistical literature.
Data are numbers, characters, images or other outputs from devices that convert physical quantities into symbols. Data can be stored on media, as in databases; numbers can be converted into graphs and charts, which again can be stored on media or printed. Most of the decision making, in business or other disciplines, requires useful data. A database that contains sensitive or confidential information stores data and therefore it is secured - by encryption using a public key, by encryption using a changing public key, in which case the data is held secure while the public key is changed, or by restricting access to it by the operating system.
A database (DB) application must protect the confidentiality of sensitive data and also must provide reasonably accurate aggregates that can be used for decision making. One approach to achieve this goal is to use a statistical database system (SDB). An SDB allows users to access aggregates for subsets of records; the database administrator (DBA) sets a minimum threshold rule on the size of the subset for which aggregates can be accessed. As an example, in an SDB, if a query returns less than or equal to 89 records, then no information is provided to the user for such a query.
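A minimal Python sketch of that threshold rule follows (the record layout and helper name are assumptions; the 89-record cutoff is the one used in the example above):

    # Hypothetical illustration of an SDB minimum-threshold rule.
    THRESHOLD = 89

    def sdb_sum(records, predicate, field):
        """Return the sum of `field` over matching records, or None (suppressed)
        if the matching subset is too small to release safely."""
        subset = [r[field] for r in records if predicate(r)]
        if len(subset) <= THRESHOLD:
            return None                     # query matches too few records: suppress
        return sum(subset)

    records = [{"spend": i} for i in range(1, 201)]
    print(sdb_sum(records, lambda r: r["spend"] > 150, "spend"))   # 50 matches -> None
    print(sdb_sum(records, lambda r: r["spend"] > 50, "spend"))    # 150 matches -> a total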
A database that obfuscates data is conventionally known as a secret database. A secret database is ideally efficient (stores the data in an efficient manner with minimal overhead), provides a query language (e.g., SQL) interface, is repeatable, i.e., it returns identical results for identical queries, and protects the confidentiality of individual records.
A secret database may be implemented in a parallel fashion as in a parallel set of query pre and post filters. These may be implemented as distributed hardware components, giving the obfuscation the ability to handle very large databases and to run the queries against the database in a distributed way.
There are a number of known techniques to obfuscate data. In controlled rounding, the cell entries of a two-way table are rounded in such a way that the rounded arrays are forced to be additive along rows and columns and to the grand total (Cox and Ernst, 1982; Cox, 1987). In random rounding, cell values are rounded up or down in a random fashion; the rows (or columns) may not add up to the corresponding marginal totals. Salazar-Gonzales and Schoch (2004) developed a controlled rounding procedure for two-way tables based upon the integer linear programming algorithm. Gonzales and Cox (2005) developed software for protecting tabular data in two dimensions; this software uses the linear programming algorithm and implements several techniques for protection of tabular data: complementary cell suppression, minimum-distance controlled rounding, unbiased controlled rounding, subtotals constrained controlled rounding, and controlled tabular adjustment.
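By way of illustration, the sketch below shows unbiased random rounding of cell values to a base; it is a generic construction, not the specific controlled-rounding procedures cited above, and it also exhibits the drawback mentioned in the paragraph: the rounded cells generally no longer add up to the true row total.

    import random

    def random_round(value, base=5, rng=random):
        """Unbiased random rounding: round down with probability 1 - r/base and up
        with probability r/base, where r = value % base, so the expected rounded
        value equals the true value."""
        r = value % base
        low = value - r
        return low + base if rng.random() < r / base else low

    row = [3, 7, 12, 4]                          # true cell values
    rounded = [random_round(v) for v in row]     # each cell rounded to base 5
    print(rounded, sum(rounded), sum(row))       # rounded total usually differs from 26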
Cox (1980) considered the problem of statistical disclosure control for aggregates or tabulation cells and discussed cell suppression methodology under which all cells containing sensitive information are suppressed from publication. Duncan and Lambert (1986) used Bayesian predictive posterior distributions for the assessment of disclosure of individual information, given aggregate data. Duncan and Mukherjee (2000) considered combining query restriction and data obfuscating to thwart stacks by data snoopers. Chowdhury et al. (1999) have developed two new matrix operators for confidentiality protection.
Franconi and Stander (2002) proposed methods for obfuscating business microdata based upon the method of multiple linear regression (MLR); their method consists of fitting an MLR equation to one variable based upon the values of the other variables in the database, using all of the records in the database. There are three potential problems with this approach:
(1) the fitted MLR equations may provide poor prediction, which in turn will lead to very poor resolution in the obscured values,
(2) the amount of obfuscation cannot be controlled, and
(3) since databases are typically very large, this method may not be computationally efficient.
Muralidhar et al. (1999) developed a method for obfuscating multivariate data by adding a random noise to value data; their method preserves the relationships among the variables, and the user is given access to perturbed data. One potential problem with this approach is that a query may not be repeatable, i.e., identical queries may produce random outputs, in which case one can get very close to the true values by running identical queries a large number of times.
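The repeatability problem can be shown in a few lines of Python: when fresh noise is drawn on every call, averaging many identical queries converges on the true value, whereas noise seeded deterministically from the value itself returns the same obfuscated answer every time. The seeding choice below (a hash of the value) is only one possible assumption.

    import random
    import statistics

    TRUE_VALUE = 1234.0

    def noisy_once():
        """Fresh noise on every call: identical queries give different answers."""
        return TRUE_VALUE + random.gauss(0, 50)

    def noisy_repeatable():
        """Noise seeded from the value itself: identical queries give identical answers."""
        rng = random.Random(hash(TRUE_VALUE))    # hypothetical seeding choice
        return TRUE_VALUE + rng.gauss(0, 50)

    estimates = [noisy_once() for _ in range(10_000)]
    print(round(statistics.mean(estimates), 1))     # drifts back towards 1234.0
    print(noisy_repeatable(), noisy_repeatable())   # the same obfuscated value twice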
Thus, it is desirable to provide a system and method for obfuscating data so that a data request or query is repeatable, and access is allowed to users of the data while limiting the disclosure of confidential information on an individual.
EXEMPLARY EMBODIMENTS
There is disclosed a method of obfuscation in which a standard data query may be submitted to a secret database with output data being obfuscated. According to one exemplary embodiment there is disclosed a method of obfuscating data so that output values of a data request are obfuscated in a repeatable manner, via the use of an Obfuscating Function (OF) whilst maintaining the amount of obfuscation within a range so that the transformed values provide to a user information of a prescribed level of granularity.
There is further disclosed a method of obfuscating data comprising: running an unconstrained query on data in a secret database to produce output data; and obfuscating the output data using a repeatable obfuscation function to return obfuscated data in response to the query.
There is also disclosed an obfuscation system comprising: an input interface; a query engine for receiving input data from the input interface; memory interfaced to the query engine for storing data and supplying data to the query engine in response to a data request; an output interface configured to receive output data from the query engine; and an obfuscation engine for obfuscating data retrieved from memory and obfuscating it in a repeatable manner prior to supplying it to the output interface.
There is further disclosed software for implementing the methods, data produced by the methods, storage media embodying the data produced by the method, hardware for implementing the methods and printed media embodying data produced by the methods.
There is further disclosed an obfuscation system comprising a database, an obfuscation circuit and a user interface wherein the obfuscation circuit operates according to the method and an obfuscation circuit adapted to interface between a database and a user interface which obfuscates data values returned from a database in response to a user query operating in accordance with the method. There is also disclosed a method of representing data having a first level of granularity at a second level of granularity, coarser than the first level of granularity, wherein the data is converted from the first level of granularity to the second level of granularity according to a rule other than the simple proximity of the data to the nearest value at the second level of granularity.
There is also disclosed a method of distributing the processing of the obfuscation such that it is distributed across multiple hardware or virtual hardware or circuit components. These components enable the obfuscation to be executed on very large databases, or allow very large volumes of queries to be processed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings which are incorporated in and constitute part of the specification, illustrate embodiments of the invention and, together with the general description of the invention given above, and the detailed description of embodiments given below, serve to explain the principles of the invention.
Figure 1 shows a system for obfuscating data;
Figure 2 A shows a method for obfuscating data based on addition of random noise to the data values;
Figure 2B shows a method for obfuscating data based on not adding any random noise to the data values;
Figure 3 shows a method for obfuscating data using regression;
Figure 4 shows a histogram of the synthetic data for the regression obfuscating of data in a first example;
Figure 5 shows a method for obfuscating data according to a third embodiment; Figures 6-11 show plots of the obfuscated values (fitted values computed from the linear model) compared to the true x_i values for i = 1, 2, ..., 6;
Figure 12 shows a histogram of values in a synthetic database based upon 49 equal intervals;
Figure 13 shows a histogram of values in a synthetic database based upon 1000 equal intervals. It can be seen from Figure 13 that even when a large number of equal class intervals is used, the frequency table has very poor resolution (especially for small values) and is not suitable for data obscuring;
Figure 14 shows a histogram of values in a synthetic database based upon unequal intervals; the x-axis in this figure is not to scale;
Figure 15 shows a graph of perturbed mid-points (Y) vs. true mid-points (M) of class intervals to which the weighted least squares line is fit;
Figure 16 shows a histogram of values in the synthetic database with 62 equal class intervals;
Figure 17 shows a histogram of values in the synthetic database with 1000 equal class intervals;
Figure 18 shows a histogram of values in the synthetic database with 67 unequal class intervals; and
Figure 19 shows a system for implementing an obfuscation method.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The system and method for obfuscating data described is particularly applicable to a database and a query language, and more particularly to a structured query language (SQL) database and it is in this context that the system and method for obfuscating data will be described. It will be appreciated, however, that the system and method for obfuscating data has greater utility since it can be used to obscure any type of data however that data may be stored or generated.
Figure 1 illustrates a system for obfuscating data wherein the obscured data may be stored in a secret database, such as a well known structured query language (SQL) database. The system may include an obfuscating unit/function that obscures the underlying data 25 in the database. Thus, when a query is made to the database, such as an SQL query as shown, the database can return results that allow a legitimate user to gain access to data of quality sufficient for making business decisions while maintaining the confidentiality of sensitive information about individual records. The obfuscation engine 24 may be implemented as one or more lines of computer code (that may or may not be part of a database management system software) that implement the different processes for obfuscating the data as described below in each of the embodiments. However, the obfuscating unit/function also may be implemented as a hardware device/circuit or as a combination of hardware and software and the system and method is not limited to any particular implementation of the obfuscating unit/function.
The software or hardware implementation of the system and method of obfuscation of this invention can be implemented in a desktop environment, or a computer cluster. Figure 19 shows a computer system in which a query is received by input interface 10 and a communications handler 11 passes the query to an input filter 12 to ensure that the query is of a permitted type. The input filter 12 may ensure that the granularity of the query matches the granularity of the obfuscation; in general it is necessary to ensure that the boundary conditions on the filters are correct for obfuscation, for example on random rounding the boundaries of filters may be random rounded before the query is processed or on statistical adjustment the input boundaries may be statistically adjusted before the query is executed. This pre-process may be used to limit the final aggregation of the data. In addition, this filter may be used to stop repeated queries that may be persistent access attempts to flood the obfuscation method. The query is then executed by query engine 13. Query engine 13 may apply filters on the queries, for example filtering out queries that restrict to very fine ranges or queries that have a very limited subset of data. This filtering may be applied by the individual selection criteria clause or to the whole of the query. Filtering includes methods such as applying joins or other functional criteria, which then retrieve data for the query from database 14. The output data from query engine 13 is then obfuscated by an obfuscation engine 15.
The output of obfuscation engine 15 passes through output filter 16. The output filter 16 may apply a restriction to the output values using a linear function or a measured threshold value. The output filter may count the frequency of the base data being filtered - for example, a simple linear filter that suppresses results constructed from fewer than X records, or a filter based on the variation in the data such that the results must be within certain statistical measures of each other, for example that the base data cannot vary by more than one standard deviation from the mean. Output filter 16 may also perform additional post-query filtering such as distributing results based on filtering (e.g. distributing email, storing output data to different storage media etc.), additional filtering (i.e. only delivering results satisfying a data field criteria) etc. Output filter 16 may include a programmable streaming data processor. The output from output filter 16 may be output by output interface 17 for use by a user or other device.
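A hedged sketch of such an output filter follows; the 90-record minimum and the one-standard-deviation rule are taken from the examples in this description, while the function and variable names are assumptions.

    import statistics

    MIN_RECORDS = 90     # the "fewer than X records" threshold is assumed to be 90 here

    def output_filter(base_values):
        """Release an aggregate only if it is built from enough base records and the
        contributing values do not vary by more than one standard deviation from
        their mean; otherwise suppress the result."""
        if len(base_values) < MIN_RECORDS:
            return None                                  # too few records
        mean = statistics.mean(base_values)
        sd = statistics.stdev(base_values)
        if any(abs(v - mean) > sd for v in base_values):
            return None                                  # excessive variation
        return sum(base_values)

    print(output_filter([100.0] * 50))           # too few records -> None
    print(output_filter([100.0] * 120))          # passes both checks -> 12000.0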
It will be appreciated that components of the system may be implemented in hardware or software. Each of these processes may be run either singularly or in parallel. When the processes are run in parallel the results of the parallel process may be the same or similar independent of which process executed the request. However, it may be advantageous for the obfuscation engine to be implemented as a specific circuit to ensure obfuscation of all output data. Although the obfuscation engine is shown as a single engine it will be appreciated that data obfuscation may be performed by multiple nodes of a clustered computer system. The data may be abstract data and/or data obtained from real world sensors etc. The output data may drive a display, printer, or some real world device (e.g. where a sensitive location cannot be revealed but a vehicle or person or other kind of moveable vessel needs to be inhibited from travelling within a certain range of a sensitive location). In another kind of obfuscation, relating to tracking of moveable objects, the results of this tracking may be presented in aggregate such that obfuscation can be applied to enable output of information such as the number of vehicles or people or vessels in an area without the recipients of said information being able to determine the specific time and/or place an event or movement occurred.
Systems and methods of data obfuscation described herein are based upon tools used in mathematical modeling, and include:
1) Obfuscation based on weighted regression, in which (a) the dependent variable is a perturbed version of the true value, and the independent variable is the true value itself, or (b) the dependent variable is the true value and the independent variables are functions of the true value and values of other value columns in the database.
2) Obfuscation based upon a Taylor Series Expansion of a function of values in the database.
3) Obfuscation in which data having a first level of granularity is shown to the user at a second level of granularity, coarser than the first level of granularity, wherein the data is converted from the first level of granularity to the second level of granularity according to a rule other than the simple proximity of the data to the nearest value at the second level of granularity.
4) A method of obfuscating data including random rounding, especially a mapping-based method based on the last digit. As an example, if the last digit is x, then it gets mapped to 9-x, where x = 0, 1, 2, ..., 8, 9. This method loses information but retains the consistency of the data. Another example is to map all odd frequencies to round down and even frequencies to round up (both mappings are sketched after this list).
5) A method of obfuscating data that is based upon a (pseudo) random number seeding function which seeds a random number generator with input value(s), with output either shifted up or down a number of granularity factors. The seeding function could be taken from some external measure, such as the last digit of the temperature or the time at any point; if repeatability is required, the seed can be held secret and entered by a human.
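The two approaches in items 4) and 5) can be sketched in Python as follows; the granularity of 10, the parity rule, and the secret seed value are assumptions made for the example.

    import random

    def map_last_digit(n):
        """Item 4: replace the last digit x of n with 9 - x (lossy but consistent)."""
        return n - (n % 10) + (9 - n % 10)

    def parity_round(n, base=10):
        """Item 4, second example: odd frequencies round down, even frequencies round up."""
        return (n // base) * base if n % 2 == 1 else (n // base) * base + base

    def seeded_shift(value, granularity=10, secret_seed=42):
        """Item 5: a generator seeded from the value (plus a secret seed, assumed here)
        shifts the output up or down a whole number of granularity factors, so the
        same input always produces the same obfuscated output."""
        rng = random.Random(hash((secret_seed, value)))
        return value + rng.choice([-2, -1, 1, 2]) * granularity

    print(map_last_digit(1237))                   # 1232
    print(parity_round(87), parity_round(86))     # 80 and 90
    print(seeded_shift(500), seeded_shift(500))   # identical on repeated calls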
Several different embodiments of the system and methods for obfuscating data are described to illustrate the system and method for obfuscating data. For example, the data may be obscured based upon random rounding and least squares regression involving one or more predictor variables. As another example, the data may be obscured based upon random rounding of frequency data and simple linear regression of value data. As yet another example, the data may be obscured using principal components regression (PCR) wherein the data may be multivariate data.
Using weighted regression each point may be given equal weight, a weighting inversely proportional to its value or a weighting inversely proportional to the mid-point of the class interval and directly proportional to the frequency of the class interval. The Obfuscation Function (OF) may minimize Total Weighted Error. The Total Weighted Error may be the Sum of Squared Errors, the Sum of Absolute Errors, the Sum of Squared Relative Errors or the Sum of Absolute Relative Errors. The method may be matrix algebra based, based upon computer search or based upon neural networks. Random errors may be added to true values to form a dependent variable y for weighted regression, using the true data value as the independent variable. Weighted regression may also be based upon using the true data value as the dependent variable, some function of the true data value as the independent variable, and fitting a regression equation. Alternatively, weighted regression may also be based upon using the true data value as the dependent variable, some function of the true data value and values in other columns as the set of independent variables, and fitting a regression equation.
Weighted regression may be applied to a subset of records to obtain an obfuscating function. Weighted regression may include selecting a subset of records which is a k-point summary of values, on which weighted regression is performed to get an obfuscating function.
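As a concrete sketch of this weighted-regression approach (under assumed parameters: normal noise with sigma = 5, weights inversely proportional to the value, and a 20-point summary as the representative subset), the fitted line a + b*x then serves as the obfuscating function applied to every value in the column.

    import numpy as np

    rng = np.random.default_rng(2009)            # fixed seed keeps the fit repeatable

    def k_point_summary(values, k=20):
        """Representative subset: k evenly spaced quantiles of the column."""
        return np.quantile(values, np.linspace(0, 1, k))

    def fit_obfuscating_line(x, sigma=5.0):
        """Weighted least squares fit of y = a + b*x, where y is a perturbed copy of
        the true values and each point is weighted inversely to its value."""
        y = x + rng.normal(0, sigma, size=x.shape)       # perturbed dependent variable
        w = 1.0 / np.maximum(x, 1e-9)                    # weights ~ 1/value
        X = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
        a, b = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ y)
        return a, b

    values = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)   # synthetic value column
    a, b = fit_obfuscating_line(k_point_summary(values))
    obfuscated = a + b * values                  # obfuscating function OF(x) = a + b*x
    print(round(a, 3), round(b, 3))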
Obfuscation may be implemented in one of many ways as follows:
1. Each value column in the database can be obfuscated, and then the obfuscated values can be stored in another database; a user may be given access to this obfuscated database. Alternatively, value data can be obfuscated in response to a data request or a query from a user.
2. When the information requested by a user is in the form of a graph or chart, visual display is provided using obfuscated data. The obfuscated data may be used in data visualization systems including geographical information systems and classical graphing systems.
3. The calculations for the obfuscation method can be made within a desktop environment, or on a computer cluster. In the case of a distributed database, which is a database that resides on storage devices that are not all attached to a common CPU but to multiple computers, which may or may not be located in the same physical location, calculations for data obfuscation can be performed in various nodes of a clustered computer.
4. The calculation of frequency in a computer program can be performed using an integer type, in which the integer type may be stored in 2, 4, 8, 16, 32, 64, 128, 256, 512 or 1024 bytes in memory.
5. Data can be obfuscated as anticipated, and then pre-calculated; the obfuscated values can be stored for user access.
6. When a data request consists of multiple logical components, restrictions may be applied on these separately. A query, for example an SQL query, may often restrict the data, typically by using the WHERE clause of the SQL statement. These statements look like "where X < 10" OR "where X > 1000". In this case if the database had only one person with X > 1000 then the result set would contain several small values which could be filtered.
7. A request for an aggregate can be handled in two ways: (a) the methods of obfuscation can be applied on value data for which the aggregate is requested, and then the aggregate is computed from the obfuscated data, and (b) the aggregate is first computed based on the true values in the database, and then the aggregate is obfuscated before returning an output to the user (both options are sketched after this list).
8. Restrictions can be applied to the data relating to the granularity of the frequency data. If the granularity of the result set is 90 records, then it might be necessary to only allow filters in multiples of 90, in other words 90, 180, 270, 360, 450...
9. The obfuscation amount could be altered in response to a number of factors, including security level, frequency of query, or granularity of the output data. For example, if a query sums all data in the database it will not need obfuscation. This would also be an efficiency application. This would need to be applied to each sub-clause in the "Where" statement.
10. The amount of obfuscation can be made to depend upon (a) different user access rights, or (b) different levels of frequency of output aggregation. As an example, the output in response to a user with low access rights may have a larger amount of obfuscation than that for a user with high access rights. As another example, the output in response to a query with frequency of 110 may have a larger amount of obfuscation than a query with frequency of 505.
11. The obfuscated values may be used in the calculation of an index to the obfuscated data. An index on a database, such as a hash or B-Tree index, is a pre-calculated lookup structure that makes queries faster. The implementation would need to build indexes on the obfuscated values.
12. The original data can be stored in a secured format, for example, in an encrypted database, where the access to the data is restricted by the operating system. The encryption can be done using a public key encryption method, which may be changed in a secure manner. The encryption can be applied to the output data as well.
13. The system and method of this invention produces obfuscated data. The output data can be used for further calculations in many ways and for many purposes: to compute a statistical summary of the database, to perform heuristic calculations on values in the database, or to prepare graphs in a graphics software package or a geographic information system. The obfuscated data can of course be printed, stored on any media for further computations, or even encrypted.
14. When data is transmitted between nodes of a clustered computer, there may be errors in the transmission. This transmission can be at any level of the solution stack. Error checks monitor for transmission errors; this is of particular value in between-node communications in a cluster. The integrity check value is implemented with a technology selected from the group consisting of: CRC (cyclic redundancy check), hash, MD5, SHA-1, SHA-2, HMAC (keyed-hash message authentication code), partial-hash-value and parity checks. If the solution is implemented as a middleware layer, the communications between the database cluster and the middleware are integrity checked.
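The following Python fragment is a minimal, non-authoritative sketch of the two aggregate-handling options in item 7 above: (a) obfuscate each value and then aggregate, versus (b) aggregate the true values and then obfuscate the aggregate. The function names and the particular obfuscating line (the coefficients fitted in Example 1 later in this description) are illustrative assumptions only, not a prescribed interface.

def obfuscate_value(x, a=-0.6071, b=1.1205):
    # Hypothetical obfuscating function (OF): a fitted straight line a + b*x.
    return a + b * x

def aggregate_of_obfuscated(values):
    # Option (a): obfuscate each value first, then compute the aggregate.
    return sum(obfuscate_value(x) for x in values)

def obfuscated_aggregate(values):
    # Option (b): aggregate the true values first, then obfuscate the result.
    return obfuscate_value(sum(values))

query_result = [6.67, 55.2, 4.92, 45.6]
print(aggregate_of_obfuscated(query_result))
print(obfuscated_aggregate(query_result))

Both options return a repeatable result for identical queries because the obfuscating function itself is deterministic once fitted.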
In one embodiment, the obfuscating unit/function may use a least squares regression for obfuscating the base data and random rounding is then applied to output from regression to further reduce resolution. In broad terms, this embodiment uses a method of least squares regression in a computationally efficient way that allows the user to control the amount of obfuscating while maintaining repeatability of a query. The obfuscating unit in this embodiment provides a database application (DBA) such as a database management application with the flexibility of obfuscating data with or without adding a random noise to the values in the database, is easier to implement as it is based on regression which may be applied to all N records in the database in case N is moderate and to a subset of records in the database in case N is extremely large, and will yield identical outputs to identical queries, even if the value data is perturbed by adding a random noise.
In this embodiment, the obfuscating of value data is done by performing weighted regression. Weights for data points used in regression can be chosen in one of two ways:
(1) Weights proportional to some function of value, and
(2) Equal weights, in which case the Weighted Least Squares Regression becomes Ordinary Least Square Regression.
In a regression modeling application, a linear model is fitted to a dependent variable $Y$ as a function of predictor variables $X_1, X_2, \ldots, X_p$, where $p$ can equal 1 (in which case obfuscating is done on one column) or $p$ can be greater than 1 (in which case several columns need to be obscured at once). The data needed for regression consists of a set of $n$ records $\{(X_{1,i}, X_{2,i}, \ldots, X_{p,i}, Y_i)\}$, where $n$ is a positive integer and $p < n \le N$.
The method of weighted regression for the case of one predictor variable (p = 1) is briefly described below:
$y_i = a + b x_i + e_i$, with $e_i = y_i - (a + b x_i)$.
The unknown parameters a and b are chosen so as to minimize a measure of departure from the model, which we will refer to as the Total Weighted Error $= H(a,b)$.
Total Weighted Error can be defined in one of several ways:
(1) Weighted Sum of Squared Errors: $H(a,b) = \sum_{i=1}^{n} w_i e_i^2 = \sum_{i=1}^{n} w_i (y_i - a - b x_i)^2$
(2) Weighted Sum of Absolute Errors: $H(a,b) = \sum_{i=1}^{n} w_i |e_i| = \sum_{i=1}^{n} w_i |y_i - a - b x_i|$
(3) Weighted Sum of Squared Relative Errors: $H(a,b) = \sum_{i=1}^{n} w_i (e_i / y_i)^2$
(4) Weighted Sum of Absolute Relative Errors: $H(a,b) = \sum_{i=1}^{n} w_i |e_i / y_i|$
Total Weighted Error as defined in (1) or (3) can be minimized by using matrix algebra, whereas the Total Weighted Error as defined in (2) or (4) needs to be minimized using a computer search method, or an artificial neural network. To apply weighted regression, the variable $X_{TRUE,i}$ is used as a predictor or as a dependent variable, depending upon the obfuscating method used, where $X_{TRUE,i}$ is the true value of the data in record $i$, and the variables $(X_{1,i}, X_{2,i}, \ldots, X_{p,i})$ can be chosen in one of several ways depending upon the following:
(a) p = number of columns of data to be obscured = 1
The choice for $Y_i$ depends upon whether a random error will be added to perturb the data or not, as follows:
1) A method based upon addition of a random noise to the data values
$Y_i = X_{TRUE,i} + e_i$, where $e_i$ is a random error with mean 0 and variance $\sigma^2$, as shown in Figure 2a; or
2) A method based upon not adding any random noise to the data values
Calculate $X_i = g(X_{TRUE,i})$, where $g(X)$ is some non-linear function of $X$, as shown in Figure 2b.
Set the dependent variable as $Y_i = X_{TRUE,i}$ and the predictor as $X_i = g(X_{TRUE,i})$, and fit the straight line $Y_i = a + b X_i + e_i$ to the $(X_i, Y_i)$ data.
This fitted line is used as the output of an SQL query.
(b) p = number of columns of data to be obscured > 1
This case can be handled in one of two ways:
(i) Using the univariate method for case p = 1 (given above) on each of the m columns of data to be obscured, or
(ii) Using the method of least squares to fit a multiple regression model
$Y_i = \beta_0 + \beta_1 X_{1,i} + \cdots + \beta_m X_{m,i} + e_i$ to the data $\{(X_{1,i}, X_{2,i}, \ldots, X_{m,i}, Y_i)\}$, $i = 1, 2, \ldots, n$, where $m < p$ and the m predictors $X_{1,i}, X_{2,i}, \ldots, X_{m,i}$ are derived from the true values. As an example, the m predictors can be taken as the first m principal component scores (Johnson and Wichern, 2007) obtained from performing a principal components analysis (PCA) of the subset $\{(X_{1,i}, X_{2,i}, \ldots, X_{m,i}, Y_i), i = 1, 2, \ldots, n\}$.
This MLR equation is then used as the output of an SQL query.
We will now provide the details of the above three embodiments.
First Embodiment for Obfuscating Data
In the first embodiment, the method uses $X_{TRUE,i}$ to denote the true value of a column (variable) in a database corresponding to the i-th record, $i = 1, 2, \ldots, N$, where N is the total number of records in the database. The method of weighted regression for data obfuscation can of course be applied to all of the N records. In the method for base data obfuscating in this embodiment, however, a representative subset of $X_{TRUE}$ values is selected and then a random error with variance proportional to the magnitude of the $X_{TRUE}$ value is added to each of the values in the selected subset to obtain n pairs of data points $(x_i, y_i)$, where $x_i = X_{TRUE,i}$ and $y_i = x_i + e_i$, $i = 1, 2, \ldots, n$. The random errors $e_i$ can be generated by first generating $e_i$ from a normal population with mean 0 and common standard deviation $\sigma$, and then multiplying $e_i$ by $\sqrt{x_i}$. Since the errors thus generated have variance proportional to $x_i$, the weighted least squares regression method is used; this involves minimizing the function $H(a,b) = \sum_{i=1}^{n} w_i [y_i - (a + b x_i)]^2$, where the weight $w_i$ is taken inversely proportional to $x_i$.
By setting the derivatives of H(a,b) with respect to a and b equal to 0, and solving the resulting system of the two linear equations in two unknowns a and b, we obtain the estimates of a and b:
$\hat{b} = \dfrac{\sum_{i=1}^{n} w_i x_i y_i - \left(\sum_{i=1}^{n} w_i x_i\right)\left(\sum_{i=1}^{n} w_i y_i\right)}{\sum_{i=1}^{n} w_i x_i^2 - \left(\sum_{i=1}^{n} w_i x_i\right)^2} \qquad (1)$
$\hat{a} = \sum_{i=1}^{n} w_i y_i - \hat{b} \sum_{i=1}^{n} w_i x_i \qquad (2)$
where $\hat{a}$ = estimate of a, $\hat{b}$ = estimate of b, and the weights $w_i$ are normalized so that $\sum_{i=1}^{n} w_i = 1$.
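For reference, a short derivation of (1) and (2) follows; it assumes the weights are normalized so that $\sum_i w_i = 1$, which is consistent with the worked calculations in Example 1 below.
\[
\frac{\partial H}{\partial a} = -2\sum_{i=1}^{n} w_i\,(y_i - a - b x_i) = 0,
\qquad
\frac{\partial H}{\partial b} = -2\sum_{i=1}^{n} w_i x_i\,(y_i - a - b x_i) = 0 .
\]
With $\sum_i w_i = 1$, the first equation gives $\hat{a} = \sum_i w_i y_i - \hat{b}\sum_i w_i x_i$, and substituting this into the second gives
\[
\hat{b} = \frac{\sum_i w_i x_i y_i - \bigl(\sum_i w_i x_i\bigr)\bigl(\sum_i w_i y_i\bigr)}
               {\sum_i w_i x_i^{2} - \bigl(\sum_i w_i x_i\bigr)^{2}} .
\]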
The subset $\{x_1, x_2, \ldots, x_n\}$ can be selected in many ways. We will use the following simple method to select a subset of size n = 20 for data obfuscating:
For each column $X_{TRUE}$ in the database, determine its 20-point summary as described below:
$x_1 = \min_{1 \le i \le N} X_{TRUE,i}$
$x_p$ = 5p-th percentile of the data $\{X_{TRUE,i}, i = 1, 2, \ldots, N\}$, $p = 1, 2, \ldots, 18$
$x_{20} = \max_{1 \le i \le N} X_{TRUE,i}$
The method in this embodiment of data obfuscating can now be implemented as follows:
1. Compute the 20-point summary of the data $X_{TRUE,i}$, $i = 1, 2, \ldots, N$, and obtain $x_i$, $i = 1, 2, \ldots, 20$.
2. Generate $e_i$ from the normal distribution with mean 0 and standard deviation $\sigma$.
3. Calculate $y_i = x_i + e_i \sqrt{x_i}$.
4. Calculate weights $w_i \propto 1/x_i$, normalized so that $\sum_{i=1}^{20} w_i = 1$, that is, $w_i = (1/x_i) \big/ \sum_{j=1}^{20} (1/x_j)$.
5. Calculate the weighted least squares estimates $\hat{a}$ and $\hat{b}$ using the equations (1) and (2) given above.
6. If the output of a query includes $X_{TRUE,i}$, then its obscured version $X_{OBSCURED,i} = \hat{a} + \hat{b}\, X_{TRUE,i}$ is used in place of $X_{TRUE,i}$.
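A minimal Python sketch of steps 1 to 6 above follows. It assumes positive-valued data, normalized weights proportional to $1/x_i$, and a 20-point summary taken at the minimum, the 5th to 90th percentiles, and the maximum; it is an illustration of the procedure, not the patent's reference implementation.

import numpy as np

def fit_obfuscating_line(x_true, sigma=0.5, rng=None):
    # Steps 1-5: 20-point summary, perturbation, and weighted least squares.
    rng = np.random.default_rng(rng)
    pct = [0] + list(range(5, 95, 5)) + [100]                  # 20 percentile points (assumed layout)
    x = np.percentile(np.asarray(x_true, dtype=float), pct)    # step 1: 20-point summary
    e = rng.normal(0.0, sigma, size=x.size)                    # step 2: N(0, sigma) errors
    y = x + e * np.sqrt(x)                                     # step 3: error variance proportional to x_i
    w = (1.0 / x) / np.sum(1.0 / x)                            # step 4: normalized 1/x_i weights
    wx, wy = np.sum(w * x), np.sum(w * y)
    b = (np.sum(w * x * y) - wx * wy) / (np.sum(w * x * x) - wx ** 2)   # equation (1)
    a = wy - b * wx                                                     # equation (2)
    return a, b

def obscure_value(x_true_i, a, b):
    # Step 6: the obscured value returned in place of the true value.
    return a + b * x_true_i

On the synthetic mixture data of Example 1 below, a run of this sketch produces a straight line of the same form as the reported $X_{obscured} = -0.6071 + 1.1205\, X_{TRUE}$, although the exact coefficients vary with the random errors drawn.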
Now, examples of the weighted regression based obfuscating of base data and obfuscating aggregate data using random rounding are described.
Example 1 : Linear regression based obfuscating of value data
The data for this example was generated from the mixture normal distribution
$f(x) = 0.5 f_1(x) + 0.25 f_2(x) + 0.1 f_3(x) + 0.1 f_4(x) + 0.05 f_5(x)$
where $f_1(x)$ is normal with mean 5 and sd 1, $f_2(x)$ is normal with mean 50 and sd 5, $f_3(x)$ is normal with mean 1000 and sd 10, $f_4(x)$ is normal with mean 10000 and sd 100, and $f_5(x)$ is normal with mean 100000 and sd 100.
A total of N = 10000 data points were generated; this set of 10000 records is our database $\{X_{TRUE,i}, i = 1, 2, \ldots, 10000\}$. A histogram of this synthetic data is shown in Figure 4. The 20-point summary of this set was then calculated. The errors $e_i$, $i = 1, 2, \ldots, 20$, were generated from the normal distribution with mean 0 and sd $\sigma = 0.5$, and then $y_i$ is calculated as follows: $y_i = x_i + e_i$, $i = 1, 2, \ldots, 20$.
A straight line is then fitted to the $(x_i, y_i)$ data, where $x_i = X_{TRUE,i}$, by using the method of weighted least squares, with weights inversely proportional to the variance of the error terms. The intermediate calculations for this example are shown below.
Table 1: Synthetic database of 10,000 records, its 20-point summary and data for regression
The calculations for $\hat{a}$ and $\hat{b}$ for the above data are shown below:
$\hat{b} = 1.1205$
$\hat{a} = \sum w_i y_i - \hat{b} \sum w_i x_i = 2.737711 - 1.1205 \times 2.985032 = -0.6071$
We now illustrate the regression based data obfuscating on the results obtained from a query. Suppose a query on our synthetic database produced the 15 records shown in the first column of Table 2 below. The second column shows the values of $X_{obscured} = -0.6071 + 1.1205\, X_{TRUE}$ computed from the weighted regression line calculated above. The last column shows the values of diff $= X_{TRUE} - X_{obscured}$, which is the amount of obfuscating in the base data. The columns of the table below are $X_{TRUE}$, $X_{obscured}$, and diff.
6.673162 6.870178021 -0.19702
55.20078 61.24537399 -6.04459
4.921611 4.907565126 0.014046
100135.7 112201.4448 -12065.7
3.346527 3.142683504 0.203843
45.62813 50.51921967 -4.89109
4.382109 4.303053135 0.079056
10084.94 11299.56817 -1214.63
5.707111 5.787717876 -0.08061
3.530369 3.348678465 0.181691
990.7026 1109.475163 -118.773
9920.742 11115.58431 -1194.84
100024 112076.2849 -12052.3
1070.893 1199.328507 -128.436
1222356:1 ZfflWWm I26785P
Table 2: The true and obscured values for the sample query
The percent relative difference is calculated from the formula
$100 \times \dfrac{\left|\sum X_{true} - \sum X_{obscured}\right|}{\sum X_{true}} = 12.04\%$
As mentioned earlier, the method may apply random rounding on output from weighted regression to further reduce the resolution, as shown in Figure 5.
Example 2: Obfuscating the aggregate
In obfuscating the aggregate, frequency suppression will be used and frequency values below a preset threshold will be suppressed. To illustrate this procedure, an example of random rounding with frequency suppression at 90 is used. In this example, if the frequency of a query is less than 90, then the frequency will be suppressed and the output will be annulled. Also, the random rounding procedure used in this example has a base of 10, and rounds up a frequency if the last digit of the frequency is even, and rounds down if it is odd.
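A minimal Python sketch of the procedure just described follows (suppression threshold 90, base 10, rounding up when the last digit of the frequency is even and down when it is odd); the function name, the return convention for a suppressed output, and the handling of a trailing digit of zero are illustrative assumptions.

def round_frequency(freq, threshold=90):
    # Frequency suppression: annul the output when the frequency is below the threshold.
    if freq < threshold:
        return None
    # Random rounding with base 10: the last digit decides the direction --
    # round up when it is even, down when it is odd (a last digit of 0 is
    # left unchanged, an assumption where the description is silent).
    last = freq % 10
    if last == 0:
        return freq
    return freq + (10 - last) if last % 2 == 0 else freq - last

For instance, a frequency of 94 would be reported as 100, a frequency of 93 as 90, and a frequency of 85 would be suppressed.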
Frequency suppression, however, may not provide sufficient protection against tracker attacks (Duncan and Mukherjee, 2000), since the answer to a query with size less than the specified threshold (90 in the above example) may be computed from a finite sequence of legitimate queries, i.e., queries of sizes above 90 each. A high frequency filter can be configured to hamper such attempts to determine the true values from the obfuscated values.
In the following table, we illustrate the method of random rounding as applied to values; here we use notation rrX for random rounded X, and base equals 5 x mean.
Second Embodiment for Obfuscating Data
In this embodiment, the method and system to obscure data uses a univariate statistical method for obfuscating data in one column of the database, without randomly perturbing the data.
(1) Calculate $X_i = g(X_{TRUE,i})$, where $g(X)$ is some non-linear function of $X$.
(2) Set the dependent variable as $Y_i = X_{TRUE,i}$ and the predictor as $X_i = g(X_{TRUE,i})$, and fit the straight line $Y_i = a + b X_i + e_i$ to the $(X_i, Y_i)$ data.
This fitted line is used as the output of an SQL query.
Example 2: Obfuscating of one column without adding a random error
Suppose a query produces 110 records. The obfuscating approach of the second embodiment can be used on a smaller subset of the data (e.g., a 5-point summary of the data), as demonstrated in Table 2 below. We have used $g(x) = x^{0.25}$ in this example.
5-point summary of the data:
            x        g(x) = x^0.25
minimum     641.1    5.03189
Q1          890.5    5.46271
median      1009     5.63602
Q3          1098.3   5.75679
maximum     1369.6   6.08343
The regression equation fitted to the 5 data points $(g(x), x)$ is $X_{obscured} = -2846 + 688\, g(x)$, with $g(x) = x^{0.25}$.
Table 2: Intermediate calculations for data of Example 2
The obscured values for x are calculated from the above regression line:
$x_{obscured,i} = -2846 + 688\, x_i^{0.25}$
The descriptive statistics of the residual $x_i - x_{obscured,i}$ are given below:
Variable   Mean     sd      Minimum   Q1       Median   Q3       Maximum
resid      -13.82   12.59   -23.63    -22.55   -18.45   -10.87   30.20
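A brief Python sketch of the second embodiment applied to a 5-point summary follows, assuming $g(x) = x^{0.25}$ as in the example; ordinary least squares is used since no random noise is added, and the function names are illustrative only.

import numpy as np

def fit_second_embodiment(x_true, g=lambda x: x ** 0.25):
    # Fit X = a + b*g(X) to a 5-point summary (min, Q1, median, Q3, max);
    # the dependent variable is the true value, the predictor is g(true value).
    x5 = np.percentile(np.asarray(x_true, dtype=float), [0, 25, 50, 75, 100])
    gx = g(x5)
    b, a = np.polyfit(gx, x5, 1)          # ordinary least squares straight line
    return a, b

def obscure_with_g(x_true_i, a, b, g=lambda x: x ** 0.25):
    return a + b * g(x_true_i)

Applied to the 5-point summary shown above (641.1, 890.5, 1009, 1098.3, 1369.6), this sketch should reproduce a line of the same form as the reported $X_{obscured} = -2846 + 688\, x^{0.25}$, up to rounding.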
Third Embodiment for Obfuscating Data
In this embodiment, the method and system to obscure data uses the multivariate statistical method of Principal Components Regression (PCR) for obfuscating data in one or more columns without randomly perturbing the data.
OBFUSCATING OF SEVERAL COLUMNS AT ONCE
The method of this invention can be carried out in steps (a) and (b).
Step (a): Perform Principal Components Analysis (PCA) of data in records produced by a query in the following 8 steps:
1) If several columns need to be obscured at once, then the data in these columns (say p columns) of the database are first read by the computer code.
2) The correlation matrix R of the multivariate data is computed.
3) An eigenanalysis of R is performed, which yields p eigenvalue-eigenvector pairs $(\lambda_i, e_i)$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p > 0$, where $\lambda_i$ is the i-th eigenvalue, $e_i$ is the corresponding eigenvector, and p is the number of columns to be obscured.
4) An integer m < p is selected (the smaller the value of m, the larger the amount of obfuscating).
5) Principal components (PC's) $PC_1, PC_2, \ldots, PC_m$ are calculated from the formula
$PC_i = t_i^T X = t_{1i} X_1 + t_{2i} X_2 + \cdots + t_{pi} X_p$.
6) The PC-scores are the values of the principal components calculated for each record. The PC-scores are computed and saved in the computer code.
7) The first m PC's explain $100 \times \dfrac{\lambda_1 + \lambda_2 + \cdots + \lambda_m}{p}$ percent of the variation in the data set. A multiple regression equation is obtained for each of the columns $X_i$. Taken together, the p principal components explain 100% of the variation in the data.
8) Save the principal component scores (values of the PC's for the records produced by the query), which will be denoted by $PC_{1,i}, PC_{2,i}, \ldots, PC_{m,i}$.
Step (b): Run PCR for each column $X_j$ as a function of the first m PC-scores $PC_{1,i}, PC_{2,i}, \ldots, PC_{m,i}$:
Use the method of least squares to fit the linear model
$X_{j,i} = \beta_{0,j} + \beta_{1,j} PC_{1,i} + \cdots + \beta_{m,j} PC_{m,i} + e_{j,i}$
where
$X_{j,i}$ = true value of the j-th column for the i-th record, j = 1, 2, ..., p; i = 1, 2, ..., n;
p = the number of columns to be obscured;
n = the number of records produced by the query;
m = the number of PCs selected for obscuring.
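The following Python sketch outlines steps (a) and (b) for obscuring p columns at once via PCA followed by per-column regression on the first m PC-scores. It is a simplified illustration (correlation-matrix PCA on standardized data, ordinary least squares for each column), not the patent's reference code, and the function name pcr_obfuscate is an assumption.

import numpy as np

def pcr_obfuscate(X, m):
    # X: (n, p) array of the p columns to be obscured; m < p PCs are retained.
    n, p = X.shape
    mean, sd = X.mean(axis=0), X.std(axis=0, ddof=1)
    Z = (X - mean) / sd                          # standardize so PCA matches the correlation matrix
    R = np.corrcoef(X, rowvar=False)             # step a2: correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)         # step a3: eigenanalysis
    order = np.argsort(eigvals)[::-1]            # sort eigenvalues in decreasing order
    T = eigvecs[:, order[:m]]                    # loadings of the first m PCs
    scores = Z @ T                               # step a6: PC-scores for each record
    # Step b: regress each column X_j on the m PC-scores and keep the fitted values.
    A = np.column_stack([np.ones(n), scores])
    X_obscured = np.empty_like(X, dtype=float)
    for j in range(p):
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        X_obscured[:, j] = A @ beta              # fitted (obscured) values for column j
    return X_obscured

The amount of obfuscating is controlled by m: fewer retained PCs give fitted values that depart further from the true columns.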
Appendix A shows an example of obfuscating multivariate data using the PCR-based approach of this invention.
OBFUSCATING OF ONE COLUMN ONLY BY THE METHOD OF PCR
The method also provides a PCR-based solution even for the case when only one column of the database needs obfuscating. As explained above, the PCR requires a minimum of 2 columns so the invention includes a way to work around this problem. This method is briefly described below:
1) Suppose a query produces n records with values $X_i$, where X is the column in the database that needs obfuscating. Create p−1 additional columns $g_j(X_i)$, where $g_j(X)$ is a non-linear function of X, and where
$X_i$ = value of the column X for the i-th record, j = 1, 2, ..., p−1; i = 1, 2, ..., n;
p−1 = the number of additional columns created;
n = the number of records produced by the query.
2) Perform PCA, save the PC-scores, and then use Principal Components Regression (PCR) to obtain an equation for X as a function of the first m PC's.
The PCR based obfuscating of this invention will output aggregates that are computed from random-rounded versions of values predicted by the above MLR equation.
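A hedged sketch of the one-column work-around follows, reusing the pcr_obfuscate routine sketched above: p−1 non-linear transforms of X are created and the multivariate routine is then applied to the augmented matrix. The particular transforms chosen here (fractional and integer powers of X) are illustrative assumptions only.

import numpy as np

def pcr_obfuscate_one_column(x, m=1, powers=(0.25, 0.5, 2.0)):
    # Build p-1 additional columns g_j(x) as non-linear transforms of x,
    # then run the multivariate PCA/PCR obfuscation on the augmented matrix.
    x = np.asarray(x, dtype=float)
    extra = [x ** q for q in powers]             # hypothetical g_j choices
    X = np.column_stack([x] + extra)             # p = 1 + len(powers) columns
    obscured = pcr_obfuscate(X, m)               # routine sketched above
    return obscured[:, 0]                        # obscured version of X only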
Example 3: PCR based obfuscating of several columns
We created a small database with N = 1000 records and p = 6 variables from a multivariate normal distribution. The sample correlation matrix of the data generated is given in Table 1.
Table 1: Sample correlation matrix of the generated data
The results of PCA are shown in Table 2 (eigenvalues and proportion of variance explained) and Table 3 (PC loadings).
Table 2: Eigenvalues of the sample correlation matrix and proportion of total variance in the data explained by the PC's
Table 3: PC loadings
The linear models for $X_i$ (i = 1, 2, ..., 6) using the PC-scores as the predictors, along with their respective $R^2$ values, are given in Tables 4 - 9.
Table 4: The regression model for X1
Table 5: The regression model for X2
Table 6: The regression model for X3 (PC2 and PC3 terms were not significant)
Table 7: The regression model for X4 (PC2 and PC3 terms were not significant)
Predictor   Coef      SE Coef   T         P
Constant    400.649   0.241     1664.96   0.000
PC1         7.8820    0.1133    69.59     0.000
Regression equation: X4 = 401 + 7.88 PC1 ($R^2$ = 82.9%)
Table 8: The regression model for X5 (PC3 term was not significant)
Table 9: The regression model for X6
Figures 6-11 show plots of the obscured values (fitted values computed from the above linear models) vs. the true $x_i$ values for i = 1, 2, ..., 6.
To assess the performance of the proposed data obfuscating method, we calculated descriptive statistics for the N values of error $= x_{i,j} - x_{obscured,i,j}$ and also relative error $= 100 \times |(x_{i,j} - x_{obscured,i,j}) / x_{i,j}|$.
Table 10 shows the descriptive statistics of performance measures. The variables Ql and Q3 in Table 10 are the first and third quartiles of the error and relative error terms.
Table 10: Descriptive statistics of the performance measures (error and relative error)
Table 11 shows the descriptive statistics of the 'true' data in the simulated database, and Table 12 shows the descriptive statistics of the 'obscured' data. The variables Q1 and Q3 in Tables 11 and 12 are the first and third quartiles.
Table 11: Summary statistics of the generated data
Table 12: Summary statistics of the obscured data
Discussion of Results for Third Embodiment
Table 2 shows that for the generated database, the first 3 PC's account for 92.1% of the total variation in the data. We therefore used PC1, PC2, and PC3 as the potential predictors for MLR models for the 6 variables in the database. Table 13 shows the number of PC's used in the selected MLR models and the corresponding $R^2$ values. It should be kept in mind that the amount of obfuscating in the variables is controlled by the number of PC's used in the regression models; the smaller the number of PC's used, the larger the error and the relative error terms.
Table 13: Number of PC's used in the model and the corresponding $R^2$ values
It can be seen from Table 10 that the mean relative error is around 1% and the maximum relative error ranges from approximately 4% to 20%. Tables 11 and 12 show the close agreement between the true and obscured values.
Detailed Description of The Fourth Embodiment
The regression based obfuscating method of embodiments 1 and 2 requires a representative subset of values in the column of the database to be obfuscated so that the amount of obfuscating can be controlled. In this embodiment, the regression will be performed on a frequency table representation or histogram of the data. The values in one column of a secret database typically come not from one homogeneous statistical population but a mixture of several statistical populations, which range from very small to very large values. When a histogram based on equal class width is created for such values, it is quite difficult to get a good resolution without using an extremely large number of class-intervals, which is not practical (see Figures 12, 13, and 15). In this embodiment, a frequency table based upon unequal class widths is first calculated, and the regression is performed on the perturbed versions of the mid-points of these class intervals. The details of this embodiment are given below:
1) The range of the data values is divided into k class-intervals $(L_1, U_1), (L_2, U_2), \ldots, (L_k, U_k)$, where $L_1$ = minimum value, $U_k \ge$ maximum value, and the width $U_j - L_j$ of the j-th class-interval is proportional to the mid-point $M_j = (L_j + U_j)/2$.
2) The number of values in the database column falling inside the j-th class-interval is calculated. Let $f_j$ denote this frequency.
3) Random errors $\beta_j$ (j = 1, 2, ..., k) are generated from a normal distribution with mean 0 and variance $M_j$, to make the error proportional to the magnitude of the mid-point of the class-interval.
4) The mid-points $M_j$ are perturbed by adding the error $\beta_j$, j = 1, 2, ..., k. This yields the values of the dependent variable $Y_j$, j = 1, 2, ..., k.
5) A straight line Y = a + b M is fitted to the k pairs of points (Mj, Yj) obtained in Step 4 above. The method of weighted least squares is used to fit this straight line to the k pairs of points.
6) The weight Wj assigned to the j-th point (Mj, Yj) is taken to be directly proportional to the frequency fj since a class-interval with high frequency should be assigned a higher weight than a class-interval with low frequency.
7) The weight Wj assigned to the j-th point (Mj, Yj) is taken to be inversely proportional to the mid-point Mj since large values will need a large amount of perturbation.
8) The requirements in steps 6 and 7 above lead to the following weights:
$W_j = c\,\dfrac{f_j}{M_j}$, where $c = \dfrac{1}{\sum_{j=1}^{k} f_j / M_j}$ is a normalizing constant.
9) The constants a and b of the straight line of step 5 above are estimated from the following equations (see Cardno and Singh, 2008):
$\hat{b} = \dfrac{\sum_{j=1}^{k} W_j M_j Y_j - \left(\sum_{j=1}^{k} W_j M_j\right)\left(\sum_{j=1}^{k} W_j Y_j\right)}{\sum_{j=1}^{k} W_j M_j^2 - \left(\sum_{j=1}^{k} W_j M_j\right)^2}$
$\hat{a} = \sum_{j=1}^{k} W_j Y_j - \hat{b} \sum_{j=1}^{k} W_j M_j$
where the weights $W_j$ are given in step 8 above, $\hat{a}$ = estimate of a, and $\hat{b}$ = estimate of b.
10) If the result of a query includes the i-th record in the database with true value $X_{TRUE,i}$, then the query outputs an aggregate calculated from the following equation:
$X_{obfuscated,i} = \hat{a} + \hat{b}\, X_{TRUE,i}$
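The ten steps above can be sketched in Python as follows. The weighted least-squares estimates of steps 8 and 9 are implemented as stated; the class-interval construction (a fixed geometric growth of widths, standing in for widths proportional to the mid-points) and the use of the parameter sigma as a multiplier of the $\sqrt{M_j}$-scaled errors are illustrative assumptions, not the patent's reference implementation.

import numpy as np

def unequal_class_edges(lo, hi, growth=1.5):
    # Class-interval widths grow with the mid-points; a fixed geometric
    # growth factor is used here as a simplifying assumption.
    edges = [lo]
    width = max((hi - lo) * 1e-4, 1e-9)
    while edges[-1] < hi:
        edges.append(edges[-1] + width)
        width *= growth
    return np.array(edges)

def fit_frequency_table_line(x_true, sigma=20.0, rng=None):
    rng = np.random.default_rng(rng)
    x_true = np.asarray(x_true, dtype=float)
    edges = unequal_class_edges(x_true.min(), x_true.max())          # step 1
    f, _ = np.histogram(x_true, bins=edges)                          # step 2: frequencies f_j
    M = 0.5 * (edges[:-1] + edges[1:])                               # mid-points M_j
    keep = f > 0                                                     # drop empty intervals
    f, M = f[keep], M[keep]
    beta = rng.normal(0.0, sigma, size=M.size) * np.sqrt(M)          # steps 3-4: errors with variance ~ M_j
    Y = M + beta                                                     # perturbed mid-points Y_j
    W = (f / M) / np.sum(f / M)                                      # step 8: W_j proportional to f_j / M_j
    wx, wy = np.sum(W * M), np.sum(W * Y)
    b = (np.sum(W * M * Y) - wx * wy) / (np.sum(W * M * M) - wx ** 2)   # step 9
    a = wy - b * wx
    return a, b                                                      # step 10: X_obfuscated = a + b * X_TRUE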
We next describe two examples of frequency table based obfuscating of values in one column of a database.
Example 4: Weighted least squares obfuscation based on frequency table of data generated from a mixture normal distribution
The data for this example was generated from the mixture normal distribution
$f(x) = 0.5 f_1(x) + 0.25 f_2(x) + 0.1 f_3(x) + 0.1 f_4(x) + 0.05 f_5(x)$
where $f_1(x)$ is normal with mean 5 and sd 1, $f_2(x)$ is normal with mean 50 and sd 5, $f_3(x)$ is normal with mean 1000 and sd 10, $f_4(x)$ is normal with mean 10000 and sd 100, and $f_5(x)$ is normal with mean 100000 and sd 100.
A total of N = 10000 data points were generated; this set of 10000 records constitutes our database $\{X_{TRUE,i}, i = 1, 2, \ldots, 10000\}$.
A histogram based upon equal class-intervals with k = 49 is shown in Figure 12. It can be seen that the resolution is very poor for obfuscating. A histogram based upon equal class-intervals with k = 1000 (Figure 13) does not improve the resolution by much. The method of this invention therefore is based upon a frequency tabulation of the data based upon unequal class intervals (Figure 14). Figure 15 shows a graph of perturbed mid-points (Y) vs. true mid-points (M) of class intervals, for Example 4.
Table 14 shows the intermediate calculations for computing the estimates of the intercept a and the slope b of the obfuscating straight line. The obscured values corresponding to each value in the database were then computed from the fitted regression line.
Table 15 shows the descriptive statistics of the amount of obfuscation over the entire synthetic database of 10000 records for varying σ values.
Table 16 shows the error $\sum_{j=1}^{50}\left(X_{TRUE,j} - X_{obscured,j}\right)$ and the percent relative error $100 \times \sum_{j=1}^{50}\left|\left(X_{TRUE,j} - X_{obscured,j}\right)/X_{TRUE,j}\right|$ in the aggregate of a random query of size 50 for varying $\sigma$ values.
Example 5: Weighted least squares obfuscating based on frequency table of data generated from the Zipf distribution
The Zipf probability distribution, sometimes referred to as the "zeta distribution", is given by
$f(x; a) = \dfrac{1}{\zeta(a)\, x^{a}}$,  $x = 1, 2, 3, \ldots$;  $a > 1$ is a constant,
where $\zeta(a)$ is the Riemann zeta-function defined as $\zeta(a) = \sum_{n=1}^{\infty} \dfrac{1}{n^{a}}$.
The Zipf distribution can be used to model the probability distribution of rank data, in which the probability of the n-th ranked item is given by $\dfrac{1}{\zeta(a)\, n^{a}}$.
Gan et al. (2006) discuss modeling the probability distribution of city-size by the Zipf distribution.
Hörmann and Derflinger (1996) developed a rejection-inversion method for generating random numbers from monotone discrete probability distributions. For this example, we used the acceptance-rejection method of Devroye (1986) to generate a synthetic database of 10000 records. This method is briefly described below:
1) Generate $u_1$ and $u_2$ from the uniform distribution on the interval (0, 1).
2) Set $x = \left\lfloor u_1^{-1/(a-1)} \right\rfloor$ and $t = (1 + 1/x)^{a-1}$.
3) Accept x if $u_2\, x\, \dfrac{t-1}{2^{a-1}-1} \le \dfrac{t}{2^{a-1}}$.
The random variable x has the Zipf distribution with parameter a.
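A small Python sketch of this acceptance-rejection procedure follows; the loop structure, helper name, and the use of numpy are illustrative assumptions, and the algorithm shown matches the three steps above as reconstructed from Devroye (1986).

import numpy as np

def sample_zipf(a=2.0, size=1, rng=None):
    # Acceptance-rejection sampler for the Zipf distribution with parameter a > 1.
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < size:
        u1 = 1.0 - rng.random()                      # uniform on (0, 1], avoids u1 = 0
        u2 = rng.random()
        x = int(np.floor(u1 ** (-1.0 / (a - 1.0))))  # step 2: candidate value
        t = (1.0 + 1.0 / x) ** (a - 1.0)
        # Step 3: accept when u2 * x * (t - 1) / (2**(a-1) - 1) <= t / 2**(a-1)
        if u2 * x * (t - 1.0) / (2.0 ** (a - 1.0) - 1.0) <= t / (2.0 ** (a - 1.0)):
            out.append(x)
    return np.array(out)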
For Example 5, a synthetic database of 10000 integer values was generated using the method described above for generating random numbers from the Zipf distribution with a = 2. Figures 16 and 17 show histograms of these values for k = 62 and k = 1000 class intervals of equal width. Figure 18 shows a histogram with k = 67 intervals of increasing widths.
Table 15 shows the intermediate calculations for computing the estimates of the intercept a and the slope b of the obfuscating straight line for data of Example 5. The obscured values corresponding to each value in the database were then computed from the fitted regression line.
Table 16 shows the descriptive statistics of the amount of obfuscating over the entire synthetic database of 10000 records for $\sigma$ = 20, 40, 60, 80, 100.
Table 17 shows the error $\sum_{j=1}^{50}\left(X_{TRUE,j} - X_{obscured,j}\right)$ and the percent relative error $100 \times \sum_{j=1}^{50}\left|\left(X_{TRUE,j} - X_{obscured,j}\right)/X_{TRUE,j}\right|$ in the aggregate of a random query of size 50 for varying $\sigma$ values.
Calculations for the obscuring line for Example 4: the obscuring straight line is $y = X_{obscured} = \hat{a} + \hat{b}\, X_{TRUE}$, where
$\hat{b} = \dfrac{\overline{wxy} - (\overline{wx})(\overline{wy})}{\overline{wxx} - (\overline{wx})^2} = \dfrac{50599.63 - 8.966084 \times 0.046453}{53199.92 - 8.966084^2} = 0.952554$
$\hat{a} = \overline{wy} - \hat{b} \times \overline{wx} = 0.046453 - 0.952554 \times 8.966084 = -8.49423$
where $\overline{wx}$, $\overline{wy}$, $\overline{wxy}$ and $\overline{wxx}$ denote the weighted sums $\sum_j W_j M_j$, $\sum_j W_j Y_j$, $\sum_j W_j M_j Y_j$ and $\sum_j W_j M_j^2$, respectively.
Table 15: Descriptive statistics of $X_{TRUE,j} - X_{obscured,j}$ (j = 1, 2, ..., 10000) for the synthetic database for $\sigma$ = 20, 40, ..., 120, for data of Example 4
Table 16: Sum of results (true and obscured) and the percent relative error in the sum of a random query of size 50, for data of Example 4
Table 17: Calculations for fitting the obscuring straight line by the weighted least squares method for data of Example 4
Calculations for the obscuring line for Example 5: the obscuring straight line is $y = X_{obscured} = \hat{a} + \hat{b}\, X_{TRUE}$, where $\hat{b} = 0.8671647$ and $\hat{a} = \overline{wy} - \hat{b} \times \overline{wx} = 3.787212$.
Table 18: Descriptive statistics of $X_{TRUE,j} - X_{obscured,j}$ (j = 1, 2, ..., 10000) for the synthetic database of Example 5 for $\sigma$ = 20, 40, ..., 100
Table 19: Sum of results (true and obscured) and the percent relative error in the sum of a random query of size 50 for the synthetic database of Example 5 for $\sigma$ = 20, 40, ..., 100
It will thus be seen that the present invention provides a method and system for obfuscating data that is repeatable, computationally efficient, provides a query language interface, can return identical results for identical records and preserves the confidentiality of the secret data. An independent obfuscation engine isolates obfuscation from the query engine and facilitates operation in a distributed computing environment. Dedicated obfuscation hardware reduces the risk of obfuscation being avoided.
While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in detail, it is not the intention of the Applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of the Applicant's general inventive concept.
REFERENCES
Cox, Lawrence H. (1980). Suppression methodology and statistical disclosure control. Journal of the American Statistical Association, pp. 377 - 385.
Cox, Lawrence H. (1987). A constructive procedure for unbiased controlled rounding. Journal of the American Statistical Association, pp. 520 - 524.
Duncan, George T. and Lambert, Dianne (1986). Disclosure-limited data dissemination. Journal of the American Statistical Association, pp. 10 - 18.
Duncan, George T. and Mukherjee, Sumitra (2000). Optimal Disclosure-limited strategy in statistical databases: deterring tracker attacks through additive noise. Journal of the American Statistical Association, pp. 720 - 729.
Franconi, Louisa and Stander, Julian (2002). A model - based method for disclosure limitation of business microdata. The Statistician, pp. 51 - 61.
Gonzalez Jr, Joe Fred, and Cox, Lawrence H.(2005). Software for tabular data protection. Statistics in Medicine. Vol. 24:659-669
Johnson, R. A. and Wichern, D. W. (2007). Applied Multivariate Statistical Analysis, 6th Edition. Prentice Hall, New Jersey.
Kelly, James P. , Golden, Bruce L., Assad Arjang A, and Baker, Edward K. (1990). Controlled Rounding of Tabular Data. Operations Research, Vol. 38, No. 5. (Sep. - Oct., 1990), pp. 760-772.
Kutner, Michael H., Neter, John, Nachtsheim, Chris J., and Wasserman, William (2006). Applied Linear Regression Models. McGraw-Hill/Irwin. (http://doi.contentdirections.com/mr/mgh.jsp?doi=10.1036/0072386916)
Nargundkar, M. S., and Saveland, W. (1972). Random rounding to prevent statistical disclosures. Proceedings of the Social Statistics Section, American Statistical Association, pp. 382 - 385.
Salazar-González, Juan-José and Schoch, Markus (2004). A New Tool for Applying Controlled Rounding to a Statistical Table in Microsoft Excel. Lecture Notes in Computer Science, Volume 3050, Springer-Verlag, 44-57.

CLAIMS:
1. A method of obfuscating data so that output values of a data request are obfuscated in a repeatable manner, via the use of an Obfuscating Function (OF) whilst maintaining the amount of obfuscation within a range so that the transformed values provide to a user information of a prescribed level of granularity.
2. A method of obfuscating data as claimed in claim 1 wherein the OF uses a transformation used in mathematical modeling.
3. A method of obfuscating data as claimed in claim 2, wherein the transformation is based upon the method of weighted regression.
4. A method of obfuscating data as claimed in claim 3 wherein each point used in the weighted regression gets equal weight.
5. A method of obfuscating data as claimed in claim 3, wherein the weight of each point is inversely proportional to value.
6. A method of obfuscating data as claimed in claim 3 wherein the weight of each point is inversely proportional to the mid-point of the class interval and directly proportional to the frequency of the class interval.
7. A method of obfuscating data as claimed in any one of claims 4 to 6 based upon minimizing a Total Weighted Error.
8. A method of obfuscating data as claimed in claim 7 in which the Total Weighted Error is defined to be the Sum of Squared Errors.
9. A method of obfuscating data as claimed in claim 7 in which the Total Weighted Error is defined to be the Sum of Absolute Errors.
10. A method of obfuscating data as claimed in claim 7 in which the Total Weighted Errors defined to be the Sum of Squared Relative Errors.
11. A method of obfuscating data as claimed in claim 7 in which the Total Weighted Error is defined to be the Sum of Absolute Relative Errors.
12. A method of obfuscating data as claimed in any one of Claims 8 through 11 that is matrix algebra based.
13. A method of obfuscating data as claimed in any one of claims 8 through 11, that is based upon computer search.
14. A method of obfuscating data as claimed in any one of claims 8 through 11 that is based upon neural networks.
15. A method of obfuscating data as claimed in claim 3 including perturbing values to obtain a dependent variable for the weighted regression.
16. A method of obfuscating data as claimed in claim 3, in which the true data value can be used as the independent variable.
17. A method of obfuscating data as claimed in claim 15, based upon addition of random errors to true values to form a dependent variable y for weighted regression, and using the true data value as the independent variable.
18. A method of obfuscating data as claimed in claim 3, which is based upon using the true data value as the dependent variable, some function of the true data value as the independent variable, and fitting a regression equation.
19. A method of obfuscating data as claimed in claim 3, which is based upon using the true data value as the dependent variable, some function of the true data value and values in other columns as the set of independent variables, and fitting a regression equation.
20. A method of obfuscating data as claimed in any one of claims 16 to 19 including selecting a subset of records on which the method of weighted regression will be applied to obtain an obfuscating function.
21. A method of obfuscating data as claimed in any one of claims 16 to 19 including selecting a subset of records which is a k-point summary of values, on which weighted regression is performed to get an obfuscating function.
22. A method of obfuscating data as claimed in claim 20 including selecting a subset of records which is a 20-point summary of values, on which weighted regression is performed to get an obfuscating function.
23. A method of obfuscating data as claimed in claim 20 including selecting a subset of records in which the entire set of records in the database is used in performing the weighted regression.
24. A method of obfuscating data as claimed in claim 20 including selecting a subset of records in which the set of all records that satisfy conditions of the query submitted by a user, is used in performing the weighted regression.
25. A method of obfuscating data as claimed in claim 2, which is based upon the Taylor Series Expansion of a function of values in the database.
26. A method of obfuscating data of Claim 1, in which frequency data having a first level of granularity is shown to the user at a second level of granularity, coarser than the first level of granularity, wherein the data is converted from the first level of granularity to the second level of granularity according to a rule other than the simple proximity of the data to the nearest value at the second level of granularity.
27. A method of obfuscating data, as claimed in Claim 26, that is a mapping based method based on the last digit.
28. A method of obfuscating data, as claimed in Claim 26, using random rounding based upon an odd number being rounded up and an even number being rounded down.
29. A method of obfuscating data, as claimed in Claim 26, using random rounding based upon an even number being rounded up and an odd number being rounded down.
30. A method of obfuscating data, as claimed in Claim 26, using a pseudo random number seeding function which will seed a random number generator with input values, with its output either rounded up or rounded down, or results shifted by a rounded amount up or down.
31. A method of obfuscating data, as claimed in Claim 26, using a pseudo random number seeding function which will seed a random number generator with input values, with its output either shifted up or down a number of granularity factors.
32. A method of obfuscating data as claimed in any one of claims 26 to 31 including obfuscating additional columns of data based on the granularity movement of previously obfuscated data.
33. A method of obfuscating data as claimed in Claim 1 wherein obfuscated data is supplied in response to a query.
34. A method of obfuscating data as claimed in Claim 1 wherein obfuscated data is visually displayed.
35. A method of obfuscating data as claimed in Claim 1 wherein calculations for data obfuscation are performed in nodes of a clustered computer.
36. A method of obfuscating data as claimed in Claim 1, wherein the obfuscated data is precalculated or cached.
37. A method of obfuscating data as claimed in Claim 1 including a high frequency filter configured to hamper attempts to determine the true values from the obfuscated values.
38. A method of obfuscating data as claimed in Claim 1 in which obfuscated data is supplied in response to a query by a user.
39. A method of obfuscating data as claimed in any one of claims 26 to 31 in which the calculation of frequency is performed using integer type.
40. A method of obfuscating data as claimed in claim 39 in which the integer type is stored in 2, 4, 8, 16, 32, 64, 128, 256, 512 or 1024 bytes in memory.
41. A method of obfuscating data as claimed in any one of claims 1 to 32 in which the obfuscated values are anticipated and pre-calculated.
42. A method of obfuscating data as claimed in any one of claims 1 to 32 wherein the data request consists of multiple logical components, each being applied separately.
43. A method as claimed in claim 42 where the components are applied before the aggregation is calculated.
44. A method as claimed in claim 42 where the components are applied after the aggregation is calculated.
45. A method of obfuscating data as claimed in any one of claims 42 to 44 in which restrictions are applied to the data relating to the granularity of the frequency data.
46. A method of obfuscating data as claimed in any one of claims 1 to 32 in which the amount of obfuscation is altered in response to an algorithm.
47. A method as claimed in Claim 46 where the alteration of the amount of obfuscation is made in response to different user access rights.
48. A method as claimed in Claim 47 where the alteration of the amount of obfuscation is made in response to different levels of frequency of output aggregation allowing the data to be of sufficient quality to make business decisions.
49. A method as claimed in any one of claims 1 through 32 where the obfuscated values are used in the calculation of an index to the obfuscated data.
50. A method as claimed in Claim 39 where the calculation occurs within a desktop environment.
51. A method as claimed in any one of claims 1 through 32 where the original data is stored in secured format.
52. A method as claimed in Claim 51 where the secured format is an encrypted database.
53. A method as claimed in Claim 52 where the original data is stored in a database where the access to the data is restricted by the operating system.
54. A method as claimed in Claim 53 where the original data is encrypted using public key encryption method.
55. A method as claimed in Claim 53 where the encryption key is changed and the data is held secure while the key is changed.
56. A method as claimed in any one of claims 1 to 32 wherein the output data is encrypted.
57. A method as claimed in any one of claims 1 through 32 where the output data is used for further calculation.
58. A method as claimed in claim 57 wherein the further calculation is a statistical calculation.
59. A method as claimed in claim 57 wherein the further calculation employs heuristic methods.
60. A method as claimed in Claim 57 wherein the output data is used in data visualization.
61. A method as claimed in Claim 60 wherein the output data is used in a geographic information system.
62. A method as claimed in claim 61 wherein the output data is used in a classical graphing method.
63. The method of any one of claims 1 through 32, wherein the integrity check value is implemented with a technology selected from the group consisting of: CRC (cyclic redundancy check), hash, MD5, SHA-I, SHA-2, HMAC (keyed- hash message authentication code), partial-hash-value and parity checks.
64. A method as claimed in any one of the preceding claims wherein the data request is a request in a query language.
65. A method as claimed in claim 64 wherein the query language is a structured query language.
66. A method as claimed in any one of the preceding claims wherein the obfuscation includes annulment of output values.
67. A method as claimed in claim 66 wherein output values are annulled if the number of output values is below a prescribed threshold.
68. A method as claimed in any one of the preceding claims wherein the frequency of each output value is determined at a first level of granularity and converted to a second level of granularity, coarser than the first level of granularity, wherein the data is converted from the first level of granularity to the second level of granularity according to a rule other than the simple proximity of the data to the nearest value at the second level of granularity.
69. A method of representing data having a first level of granularity at a second level of granularity, coarser than the first level of granularity, wherein the data is converted from the first level of granularity to the second level of granularity according to a rule other than the simple proximity of the data to the nearest value at the second level of granularity.
70. A method of obfuscating data, as claimed in Claim 69, that is a mapping based method based on the last digit.
71. A method of obfuscating data, as claimed in Claim 70, using random rounding based upon an odd number being rounded up and an even number being rounded down.
72. A method of obfuscating data, as claimed in Claim 70, using random rounding based upon an even number being rounded up and an odd number being rounded down.
73. A method of obfuscating data, as claimed in Claim 69, using a pseudo random number seeding function which will seed a random number generator with input values, with its output either rounded up or rounded down, or results shifted by a rounded amount up or down.
74. A method of obfuscating data, as claimed in Claim 69, using a pseudo random number seeding function which will seed a random number generator with input values, with its output either shifted up or down a number of granularity factors.
75. A method of obfuscating data as claimed in any one of claims 69 to 75 including obfuscating additional columns of data based on the granularity movement of previously obfuscated data.
76. Data produced by the method of any one of the preceding claims.
77. Printed media embodying data produced by the method of any one of claims 1 to 75.
78. Storage media embodying the data produced by the method of any one of claims 1 to 75.
79. Software for implementing the method for any of claims 1 to 63.
80. Hardware for implementing the method of any one of claims 1 to 75.
81. An obfuscation circuit adapted to interface between a database and a user interface which obfuscates data values returned from a database in response to a user query operating in accordance with the method of any one of claims 1 to 75.
82. An obfuscation system comprising a database, an obfuscation circuit and a user interface wherein the obfuscation circuit operates according to the method of any one of claims 1 to 75.
83. A method of obfuscating data comprising: running an unconstrained query on data in a secret database to produce output data; and obfuscating the output data using a repeatable obfuscation function to return obfuscated data in response to the query.
84. An obfuscation system comprising: an input interface; a query engine for receiving input data from the input interface; memory interfaced to the query engine for storing data and supplying data to the query engine in response to a data request; an output interface configured to receive output data from the query engine; and an obfuscation engine for taking data retrieved from memory and obfuscating it in a repeatable manner prior to supplying it to the output interface.
85. An obfuscation system as claimed in claim 84 wherein the obfuscation engine operates according to the method of any one of claims 1 to 75.
86. An obfuscation system as claimed in claim 84 or claim 85 including a request filter to filter out requests of unallowed frequency.
87. An obfuscation system as claimed in claim 86 wherein the filter is a high frequency filter configured to hamper attempts to determine the true values from the obfuscated values.
88. An obfuscation system as claimed in any one of claims 84 to 87 wherein the obfuscation engine is a separate hardware device.
89. An obfuscation system as claimed in any one of claims 84 to 87 wherein the obfuscation engine is distributed amongst a plurality of nodes of a clustered computer system.
90. An obfuscation system as claimed in any one of claims 84 to 89 including a filter for filtering out results below a defined threshold.
91. An obfuscation system as claimed in any one of claims 84 to 90 including an output filter for filtering out results of unallowed frequency.
92. An obfuscation system as claimed in any one of claims 84 to 91 including an output filter for distributing results based on filtering.
93. An obfuscation system as claimed in any one of claims 84 to 92 including an output filter for storing results in a plurality of data storage devices based on filtering.
94. An obfuscation system as claimed in claim 92 or 93 wherein the output filter includes a programmable streaming data processor.
PCT/NZ2009/000077 2008-05-12 2009-05-12 A data obfuscation system, method, and computer implementation of data obfuscation for secret databases WO2009139650A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/992,513 US9305180B2 (en) 2008-05-12 2009-05-12 Data obfuscation system, method, and computer implementation of data obfuscation for secret databases
US15/183,449 US20160365974A1 (en) 2008-05-12 2016-06-15 Data obfuscation system, method, and computer implementation of data obfuscation for secret databases

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5261308P 2008-05-12 2008-05-12
US61/052,613 2008-05-12

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US12/992,513 A-371-Of-International US9305180B2 (en) 2008-05-12 2009-05-12 Data obfuscation system, method, and computer implementation of data obfuscation for secret databases
US201615049992A Continuation 2008-05-12 2016-02-22

Publications (1)

Publication Number Publication Date
WO2009139650A1 true WO2009139650A1 (en) 2009-11-19

Family

ID=41318887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2009/000077 WO2009139650A1 (en) 2008-05-12 2009-05-12 A data obfuscation system, method, and computer implementation of data obfuscation for secret databases

Country Status (2)

Country Link
US (2) US9305180B2 (en)
WO (1) WO2009139650A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013072428A1 (en) * 2011-11-17 2013-05-23 Good Technology Corporation Methods and apparatus for anonymising user data by aggregation
WO2014186771A1 (en) 2013-05-16 2014-11-20 Nfluence Media, Inc. Privacy sensitive persona management tools
US11176264B2 (en) 2019-08-20 2021-11-16 Bank Of America Corporation Data access control using data block level decryption
US20220253464A1 (en) * 2021-02-10 2022-08-11 Bank Of America Corporation System for identification of obfuscated electronic data through placeholder indicators
US11580249B2 (en) 2021-02-10 2023-02-14 Bank Of America Corporation System for implementing multi-dimensional data obfuscation
US20230107191A1 (en) * 2021-10-05 2023-04-06 Matthew Wong Data obfuscation platform for improving data security of preprocessing analysis by third parties
US11741248B2 (en) 2019-08-20 2023-08-29 Bank Of America Corporation Data access control using data block level encryption

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2189925A3 (en) * 2008-11-25 2015-10-14 SafeNet, Inc. Database obfuscation system and method
US10102398B2 (en) * 2009-06-01 2018-10-16 Ab Initio Technology Llc Generating obfuscated data
CA2764390C (en) * 2009-06-10 2019-02-26 Ab Initio Technology Llc Generating test data
US8626749B1 (en) * 2010-04-21 2014-01-07 Stan Trepetin System and method of analyzing encrypted data in a database in near real-time
US9946810B1 (en) 2010-04-21 2018-04-17 Stan Trepetin Mathematical method for performing homomorphic operations
US20110264631A1 (en) * 2010-04-21 2011-10-27 Dataguise Inc. Method and system for de-identification of data
US8626778B2 (en) 2010-07-23 2014-01-07 Oracle International Corporation System and method for conversion of JMS message data into database transactions for application to multiple heterogeneous databases
US8510270B2 (en) 2010-07-27 2013-08-13 Oracle International Corporation MYSQL database heterogeneous log based replication
US9298878B2 (en) * 2010-07-29 2016-03-29 Oracle International Corporation System and method for real-time transactional data obfuscation
JP5945490B2 (en) * 2011-10-11 2016-07-05 日本電信電話株式会社 Database disturbance parameter determining apparatus, method and program, and database disturbance system
GB201206203D0 (en) * 2012-04-05 2012-05-23 Dunbridge Ltd Authentication in computer networks
US20150314003A2 (en) 2012-08-09 2015-11-05 Adocia Injectable solution at ph 7 comprising at least one basal insulin the isoelectric point of which is between 5.8 and 8.5 and a hydrophobized anionic polymer
US10146958B2 (en) * 2013-03-14 2018-12-04 Mitsubishi Electric Research Laboratories, Inc. Privacy preserving statistical analysis on distributed databases
US10515231B2 (en) * 2013-11-08 2019-12-24 Symcor Inc. Method of obfuscating relationships between data in database tables
SG11201604364UA (en) 2013-12-18 2016-07-28 Ab Initio Technology Llc Data generation
KR101813481B1 (en) * 2013-12-23 2017-12-29 인텔 코포레이션 Apparatus, storage medium and method for anonymizing user data
US9502003B2 (en) 2014-01-05 2016-11-22 Spatial Cam Llc Apparatus and methods to display a modified image
US9154506B1 (en) * 2014-03-20 2015-10-06 Wipro Limited System and method for secure data generation and transmission
US9451578B2 (en) 2014-06-03 2016-09-20 Intel Corporation Temporal and spatial bounding of personal information
US9390282B2 (en) 2014-09-03 2016-07-12 Microsoft Technology Licensing, Llc Outsourcing document-transformation tasks while protecting sensitive information
US9854436B2 (en) 2014-09-25 2017-12-26 Intel Corporation Location and proximity beacon technology to enhance privacy and security
EP3128479A1 (en) * 2015-08-06 2017-02-08 Tata Consultancy Services Limited Methods and systems for transaction processing
CN106909811B (en) * 2015-12-23 2020-07-03 腾讯科技(深圳)有限公司 Method and device for processing user identification
EP3485436A4 (en) * 2016-07-18 2020-04-01 Nantomics, LLC Distributed machine learning systems, apparatus, and methods
US10915661B2 (en) 2016-11-03 2021-02-09 International Business Machines Corporation System and method for cognitive agent-based web search obfuscation
US10740418B2 (en) * 2016-11-03 2020-08-11 International Business Machines Corporation System and method for monitoring user searches to obfuscate web searches by using emulated user profiles
US10885132B2 (en) * 2016-11-03 2021-01-05 International Business Machines Corporation System and method for web search obfuscation using emulated user profiles
US10929481B2 (en) 2016-11-03 2021-02-23 International Business Machines Corporation System and method for cognitive agent-based user search behavior modeling
US10445527B2 (en) * 2016-12-21 2019-10-15 Sap Se Differential privacy and outlier detection within a non-interactive model
US10783137B2 (en) * 2017-03-10 2020-09-22 Experian Health, Inc. Identity management
US11194829B2 (en) 2017-03-24 2021-12-07 Experian Health, Inc. Methods and system for entity matching
US10579828B2 (en) 2017-08-01 2020-03-03 International Business Machines Corporation Method and system to prevent inference of personal information using pattern neutralization techniques
WO2019078374A1 (en) * 2017-10-16 2019-04-25 주식회사 센티언스 Data security maintenance method for data analysis use
US11645261B2 (en) 2018-04-27 2023-05-09 Oracle International Corporation System and method for heterogeneous database replication from a remote server
US20200012890A1 (en) * 2018-07-06 2020-01-09 Capital One Services, Llc Systems and methods for data stream simulation
US11176272B2 (en) 2018-09-12 2021-11-16 The Nielsen Company (Us), Llc Methods, systems, articles of manufacture and apparatus to privatize consumer data
US11764940B2 (en) 2019-01-10 2023-09-19 Duality Technologies, Inc. Secure search of secret data in a semi-trusted environment using homomorphic encryption
CN109889292B (en) * 2019-01-29 2020-10-02 同济大学 Time deviation calibration method in three-layer correlation audit
US20220215129A1 (en) * 2019-05-21 2022-07-07 Nippon Telegraph And Telephone Corporation Information processing apparatus, information processing method and program
US11334408B2 (en) * 2020-01-08 2022-05-17 Bank Of America Corporation Big data distributed processing and secure data transferring with fault handling
US11314874B2 (en) * 2020-01-08 2022-04-26 Bank Of America Corporation Big data distributed processing and secure data transferring with resource allocation and rebate
US11321430B2 (en) * 2020-01-08 2022-05-03 Bank Of America Corporation Big data distributed processing and secure data transferring with obfuscation
US11379603B2 (en) * 2020-01-08 2022-07-05 Bank Of America Corporation Big data distributed processing and secure data transferring with fallback control
US11363029B2 (en) * 2020-01-08 2022-06-14 Bank Of America Corporation Big data distributed processing and secure data transferring with hyper fencing
US11706381B2 (en) * 2021-05-24 2023-07-18 Getac Technology Corporation Selective obfuscation of objects in media content
CN116089661A (en) * 2021-11-05 2023-05-09 北京字节跳动网络技术有限公司 Method and device for controlling data access

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4754410A (en) * 1986-02-06 1988-06-28 Westinghouse Electric Corp. Automated rule based process control method with feedback and apparatus therefor
US5454101A (en) * 1992-09-15 1995-09-26 Universal Firmware Industries, Ltd. Data storage system with set lists which contain elements associated with parents for defining a logical hierarchy and general record pointers identifying specific data sets
US5535128A (en) * 1995-02-09 1996-07-09 The United States Of America As Represented By The Secretary Of The Air Force Hierarchical feedback control of pulsed laser deposition
US5963642A (en) * 1996-12-30 1999-10-05 Goldstein; Benjamin D. Method and apparatus for secure storage of data
US20010011276A1 (en) * 1997-05-07 2001-08-02 Robert T. Durst Jr. Scanner enhanced remote control unit and system for automatically linking to on-line resources
US5966537A (en) * 1997-05-28 1999-10-12 Sun Microsystems, Inc. Method and apparatus for dynamically optimizing an executable computer program using input data
WO1999001815A1 (en) * 1997-06-09 1999-01-14 Intertrust, Incorporated Obfuscation techniques for enhancing software security
US6643775B1 (en) * 1997-12-05 2003-11-04 Jamama, Llc Use of code obfuscation to inhibit generation of non-use-restricted versions of copy protected software applications
US6374402B1 (en) * 1998-11-16 2002-04-16 Into Networks, Inc. Method and apparatus for installation abstraction in a secure content delivery system
US6643686B1 (en) * 1998-12-18 2003-11-04 At&T Corp. System and method for counteracting message filtering
US7080257B1 (en) * 2000-03-27 2006-07-18 Microsoft Corporation Protecting digital goods using oblivious checking
US6297095B1 (en) * 2000-06-16 2001-10-02 Motorola, Inc. Memory device that includes passivated nanoclusters and method for manufacture
EP1410140B1 (en) * 2001-03-28 2017-02-15 NDS Limited Digital rights management system and method
CA2348355A1 (en) * 2001-05-24 2002-11-24 Cloakware Corporation General scheme of using encodings in computations
JP2003280754A (en) * 2002-03-25 2003-10-02 Nec Corp Hidden source program, source program converting method and device and source converting program
US7596277B2 (en) * 2002-04-09 2009-09-29 Senthil Govindaswamy Apparatus and method for detecting error in a digital image
US7200757B1 (en) * 2002-05-13 2007-04-03 University Of Kentucky Research Foundation Data shuffling procedure for masking data
US7124445B2 (en) * 2002-06-21 2006-10-17 Pace Anti-Piracy, Inc. Protecting software from unauthorized use by converting source code modules to byte codes
KR101044796B1 (en) * 2004-01-13 2011-06-29 삼성전자주식회사 Portable data storage apparatus
US7743069B2 (en) * 2004-09-03 2010-06-22 Sybase, Inc. Database system providing SQL extensions for automated encryption and decryption of column data
US7672967B2 (en) * 2005-02-07 2010-03-02 Microsoft Corporation Method and system for obfuscating data structures by deterministic natural data substitution
GB0514492D0 (en) * 2005-07-14 2005-08-17 Ntnu Technology Transfer As Secure media streaming
JP4918544B2 (en) * 2005-10-28 2012-04-18 パナソニック株式会社 Obfuscation evaluation method, obfuscation evaluation apparatus, obfuscation evaluation program, storage medium, and integrated circuit
US20080059590A1 (en) * 2006-09-05 2008-03-06 Ecole Polytechnique Federale De Lausanne (Epfl) Method to filter electronic messages in a message processing system
US8001607B2 (en) * 2006-09-27 2011-08-16 Direct Computer Resources, Inc. System and method for obfuscation of data across an enterprise
US7724918B2 (en) * 2006-11-22 2010-05-25 International Business Machines Corporation Data obfuscation of text data using entity detection and replacement
US7975308B1 (en) * 2007-09-28 2011-07-05 Symantec Corporation Method and apparatus to secure user confidential data from untrusted browser extensions
US20090132419A1 (en) * 2007-11-15 2009-05-21 Garland Grammer Obfuscating sensitive data while preserving data usability
US8094813B2 (en) * 2008-09-02 2012-01-10 Apple Inc. System and method for modulus obfuscation
US8434061B2 (en) * 2008-06-06 2013-04-30 Apple Inc. System and method for array obfuscation
US8140809B2 (en) * 2009-05-29 2012-03-20 Apple Inc. Computer implemented masked representation of data tables

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008039565A2 (en) * 2006-09-27 2008-04-03 Direct Computer Resources, Inc. System and method for obfuscation of data across an enterprise
GB2444338A (en) * 2006-12-01 2008-06-04 David Irvine Granular accessibility to data in a distributed and/or corporate network

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9489530B2 (en) 2011-11-17 2016-11-08 Good Technology Corporation Methods and apparatus for anonymising user data by aggregation
CN103946857A (en) * 2011-11-17 2014-07-23 良好科技公司 Methods and apparatus for anonymising user data by aggregation
WO2013072428A1 (en) * 2011-11-17 2013-05-23 Good Technology Corporation Methods and apparatus for anonymising user data by aggregation
US10346883B2 (en) 2013-05-16 2019-07-09 autoGraph, Inc. Privacy sensitive persona management tools
EP2997505A4 (en) * 2013-05-16 2016-12-07 Nfluence Media Inc Privacy sensitive persona management tools
US9875490B2 (en) 2013-05-16 2018-01-23 autoGraph, Inc. Privacy sensitive persona management tools
WO2014186771A1 (en) 2013-05-16 2014-11-20 Nfluence Media, Inc. Privacy sensitive persona management tools
US11176264B2 (en) 2019-08-20 2021-11-16 Bank Of America Corporation Data access control using data block level decryption
US11741248B2 (en) 2019-08-20 2023-08-29 Bank Of America Corporation Data access control using data block level encryption
US20220253464A1 (en) * 2021-02-10 2022-08-11 Bank Of America Corporation System for identification of obfuscated electronic data through placeholder indicators
US11580249B2 (en) 2021-02-10 2023-02-14 Bank Of America Corporation System for implementing multi-dimensional data obfuscation
US11907268B2 (en) * 2021-02-10 2024-02-20 Bank Of America Corporation System for identification of obfuscated electronic data through placeholder indicators
US20230107191A1 (en) * 2021-10-05 2023-04-06 Matthew Wong Data obfuscation platform for improving data security of preprocessing analysis by third parties

Also Published As

Publication number Publication date
US20160365974A1 (en) 2016-12-15
US9305180B2 (en) 2016-04-05
US20110179011A1 (en) 2011-07-21

Similar Documents

Publication Publication Date Title
US9305180B2 (en) Data obfuscation system, method, and computer implementation of data obfuscation for secret databases
US11544395B2 (en) System and method for real-time transactional data obfuscation
US10645109B1 (en) System, method, and computer program for detection of anomalous user network activity based on multiple data sources
US11893133B2 (en) Budget tracking in a differentially private database system
Li et al. Enabling multilevel trust in privacy preserving data mining
Liew et al. A data distortion by probability distribution
CN106940777B (en) Identity information privacy protection method based on sensitive information measurement
Jiang et al. Differential-private data publishing through component analysis
Zhang et al. A new scheme on privacy-preserving data classification
Aggarwal On unifying privacy and uncertain data models
Lin et al. Privacy-preserving outsourcing support vector machines with random transformation
EP3736723A1 (en) Differentially private budget tracking using renyi divergence
Yuvaraj et al. Data privacy preservation and trade-off balance between privacy and utility using deep adaptive clustering and elliptic curve digital signature algorithm
Caruccio et al. GDPR compliant information confidentiality preservation in big data processing
CN111737703A (en) Method for realizing data lake security based on dynamic data desensitization technology
Rebollo-Monedero et al. p-Probabilistic k-anonymous microaggregation for the anonymization of surveys with uncertain participation
Lin et al. Secure support vector machines outsourcing with random linear transformation
Yang et al. A privacy-preserving data obfuscation scheme used in data statistics and data mining
Wang et al. Medical privacy protection based on granular computing
Hong et al. Augmented Rotation‐Based Transformation for Privacy‐Preserving Data Clustering
Truta et al. Assessing global disclosure risk in masked microdata
Turkanovic et al. Inference attacks and control on database structures
Li et al. Preventing interval-based inference by random data perturbation
Aissaoui Proportional differential privacy (PDP): a new approach for differentially private histogram release based on buckets densities
Sun et al. On the identity anonymization of high‐dimensional rating data

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 09746826
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
WWE WIPO information: entry into national phase
Ref document number: 12992513
Country of ref document: US
32PN EP: public notification in the EP bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC
122 EP: PCT application non-entry in European phase
Ref document number: 09746826
Country of ref document: EP
Kind code of ref document: A1