34 results

Expect to know better when you know more - Less Wrong
lesswrong.com/lw/.../expect_to_know_better_when_you_know_more/
Apr 21, 2016 ... This is the Kullback–Leibler divergence of P(e|~H) from P(e|H), and hence is non-negative. Thus log[E(P(e|H)/P(e|~H))] ≥ 0, and hence ...

The Truth Points to Itself, Part I - Less Wrong
lesswrong.com/lw/ct1/the_truth_points_to_itself_part_i/
Jun 1, 2012 ... I wonder if future posts might compare this to other possible measures of interestingness, such as Gelman's Kullback-Leibler divergence ...

The Most Important Thing You Learned - Less Wrong
lesswrong.com/lw/9/the_most_important_thing_you_learned/
Feb 27, 2009 ... ... (extractable work) of a system is equal to its Kullback-Leibler divergence (a generalization of informational entropy) from its environment.

Overview for guest4 - Less Wrong
lesswrong.com/user/guest4/
Nick Hay - IIRC the minus-log probability of an outcome is usually called "surprisal" or "self-information". The Shannon entropy of a random variable is just the ...

How to Be Oversurprised - Less Wrong
lesswrong.com/lw/g6b/how_to_be_oversurprised/
Jan 7, 2013 ... In that case, there's essentially one consistent notion of surprise (also called: self-information, surprisal). The amount of surprise that a random ...

What's wrong with this picture? - Less Wrong
lesswrong.com/lw/n8h/whats_wrong_with_this_picture/
Jan 28, 2016 ... ... prior and posterior using the Kullback-Leibler divergence and use it ... you're using self-information/surprisal interchangeably with surprise.

The Moral Status of Independent Identical Copies - Less Wrong
lesswrong.com/.../the_moral_status_of_independent_identical_copies/
Nov 30, 2009 ... 1 / [bits needed to specify moral principle x Kullback-Leibler divergence(distribution of actions governed by the moral principle || distribution of ...

Overview for anon15 - Less Wrong
lesswrong.com/user/anon15/
If so, you could normalize this to a probability measure, and then compute its Kullback-Leibler divergence to obtain a measure of information gain.

Overview for alex_zag_al - Less Wrong
lesswrong.com/user/alex_zag_al/
Yes... if a theory adds to the surprisal of an experimental result, then the experimental result adds precisely the same amount to the surprisal of the theory.

Optimization - Less Wrong
lesswrong.com/lw/tx/optimization/
Sep 13, 2008 ... If so, you could normalize this to a probability measure, and then compute its Kullback-Leibler divergence to obtain a measure of information ...

Ontological Crises in Artificial Agents' Value Systems by Peter de ...
lesswrong.com/lw/5si/ontological_crises_in_artificial_agents_value/
May 21, 2011 ... Like the use of the Kullback-Leibler divergence. Why that, specifically - is it just that obvious and desirable? It would seem to have some not ...

Is there a way to quantify the relationship between Person1's Map ...
lesswrong.com/lw/3ol/is_there_a_way_to_quantify_the_relationship/
Jan 9, 2011 ... Have you heard of the Kullback-Leibler divergence? One way of thinking about it is that it quantifies the amount you learn about one random ...

Rationality Quotes: October 2009 - Less Wrong
lesswrong.com/lw/1co/rationality_quotes_october_2009/
Oct 22, 2009 ... In information theory, there's the concept of the surprisal, which is the logarithm ... The higher the surprisal, the greater the information content.

Articles Tagged 'math' - Less Wrong Discussion
lesswrong.com/r/discussion/tag/math/
Apr 21, 2016 ... This is the Kullback–Leibler divergence of P(e|~H) from P(e|H), and hence is non-negative. Thus log[E(P(e|H)/P(e|~H))] ≥ 0, and hence ...

Rationality quotes: August 2010 - Less Wrong
lesswrong.com/lw/2jj/rationality_quotes_august_2010/
Aug 3, 2010 ... (This value is called the "surprisal" or "self-information".) ... to have a good experiment, you want to maximize the "expected surprisal" (i.e. sum ...

Dissenting Views - Less Wrong
lesswrong.com/lw/gi/dissenting_views/
May 26, 2009 ... ... posts (and sometimes comments) introduce jargon (i.e. Kullback-Leibler distance, utility function, priors etc.) for not very substantial reasons.

Is Google Paperclipping the Web? The Perils of Optimization by ...
lesswrong.com/lw/.../is_google_paperclipping_the_web_the_perils_of/
May 10, 2010 ... Thanks for this report on your surprisal, and what would help, and the assumptions behind discussion of paperclip maximizers.

Liked by Peterdjones - Less Wrong
lesswrong.com/user/Peterdjones/liked/
In that case, there's essentially one consistent notion of surprise (also called: self-information, surprisal). The amount of surprise that a random variable X has ...

Articles Tagged 'reinforcement_learning' - LessWrong
lesswrong.com/r/lesswrong/tag/reinforcement_learning/
Jan 17, 2013 ... In that case, there's essentially one consistent notion of surprise (also called: self-information, surprisal). The amount of surprise that a random ...

The mathematics of reduced impact: help needed - Less Wrong
lesswrong.com/.../the_mathematics_of_reduced_impact_help_needed/
Feb 16, 2012 ... This is somewhat similar to the Kullback-Leibler divergence, but that measure requires matching up the worlds for the two distributions, and ...

Open Thread February 25 - March 3 - Less Wrong
lesswrong.com/lw/jr8/open_thread_february_25_march_3/
Feb 25, 2014 ... https://encrypted.google.com/search?num=100&q=Kullback-Leibler%20OR%20surprisal%20site%3Alesswrong.com. Speaking of the LW wiki, ...

Rationality Case Study - Ad-36 - Less Wrong
lesswrong.com/lw/2qm/rationality_case_study_ad36/
Sep 22, 2010 ... ... proper reference class to calibrate my estimate by, but I would tentatively say that "Ad-36 causes obesity in humans" has a surprisal of 10 bits.

Articles Tagged 'bayesian' - Less Wrong Discussion
lesswrong.com/r/discussion/tag/bayesian/
Sep 28, 2016 ... This is the Kullback–Leibler divergence of P(e|~H) from P(e|H), and hence is non-negative. Thus log[E(P(e|H)/P(e|~H))] ≥ 0, and hence ...

Submitted by Peter_de_Blanc - Less Wrong
lesswrong.com/user/Peter_de_Blanc/submitted/
Jul 15, 2011 ... As an epistemic rationalist, I would say that 1/2 is a better approximation than 0, because the Kullback-Leibler Divergence is (about) 1 bit for the ...

In order to show you the most relevant results, we have omitted some entries very similar to the 34 already displayed.
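Several of the snippets above define the surprisal (self-information) of an outcome as the minus-log of its probability; in bits, that is -log2(p). A minimal sketch of that definition (the function name is mine, not from any of the linked posts) also shows what the Ad-36 snippet's "surprisal of 10 bits" corresponds to as a probability:

```python
import math

def surprisal_bits(p: float) -> float:
    """Self-information of an outcome with probability p, in bits: -log2(p)."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# A certain outcome carries zero surprise.
print(surprisal_bits(1.0))      # -> 0.0
# A fair coin flip carries exactly one bit.
print(surprisal_bits(0.5))      # -> 1.0
# "Surprisal of 10 bits" corresponds to probability 2**-10, about 1 in 1000.
print(surprisal_bits(2**-10))   # -> 10.0
```

Note the identity the alex_zag_al snippet gestures at: since P(e and H) = P(e)P(H|e) = P(H)P(e|H), the surprisals satisfy a symmetric bookkeeping, so evidence that raises the surprisal of a theory is raised in surprisal by that theory by the same amount.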
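The recurring derivation snippet ("This is the Kullback–Leibler divergence of P(e|~H) from P(e|H), and hence is non-negative") relies on D_KL(P || Q) = sum_x P(x) log2(P(x)/Q(x)) ≥ 0, with equality iff P = Q (Gibbs' inequality). A small numerical sketch of that quantity, under my own choice of example distributions:

```python
import math

def kl_divergence_bits(p, q):
    """D_KL(P || Q) = sum over x of P(x) * log2(P(x)/Q(x)), in bits.

    Assumes p and q are probability distributions over the same finite
    outcome set, with q[i] > 0 wherever p[i] > 0 (terms with p[i] == 0
    contribute zero by convention).
    """
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # e.g. a hypothesis predicting a fair coin
q = [0.9, 0.1]   # e.g. an alternative predicting a heavy bias

d = kl_divergence_bits(p, q)
assert d >= 0                          # Gibbs' inequality
assert kl_divergence_bits(p, p) == 0.0 # zero iff the distributions agree
```

Note that the divergence is asymmetric, which is why the snippet is careful to say "of P(e|~H) from P(e|H)": here kl_divergence_bits(p, q) and kl_divergence_bits(q, p) differ, so the direction matters.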