Praise for How to Measure Anything: Finding the Value of Intangibles in Business

"I love this book. Douglas Hubbard helps us create a path to know the answer to almost any question in business, in science, or in life . . . Hubbard helps us by showing us that when we seek metrics to solve problems, we are really trying to know something better than we know it now. How to Measure Anything provides just the tools most of us need to measure anything better, to gain that insight, to make progress, and to succeed." -Peter Tippett, PhD, M.D., Chief Technology Officer at CyberTrust and inventor of the first antivirus software

"Doug Hubbard has provided an easy-to-read, demystifying explanation of how managers can inform themselves to make less risky, more profitable business decisions. We encourage our clients to try his powerful, practical techniques." -Peter Schay, EVP and COO of The Advisory Council

"As a reader you soon realize that actually everything can be measured while learning how to measure only what matters. This book cuts through conventional cliches and business rhetoric and offers practical steps to using measurements as a tool for better decision making. Hubbard bridges the gaps to make college statistics relevant and valuable for business decisions." -Ray Gilbert, EVP, Lucent

"This book is remarkable in its range of measurement applications and its clarity of style. A must-read for every professional who has ever exclaimed, 'Sure, that concept is important, but can we measure it?'" -Dr. Jack Stenner, Cofounder and CEO of MetaMetrics, Inc.

# How to Measure Anything: Finding the Value of "Intangibles" in Business



5 out of 5 – Yevgeniy Brikman – As an engineer, this book makes me happy. A great discussion of how to break *any* problem down into quantifiable metrics, how to figure out which of those metrics is valuable, and how to measure them. The book is fairly actionable, there is a complementary website with lots of handy Excel tools, and there are plenty of examples to help you along. The only downside is that this is largely a stats book in disguise, so some parts are fairly dry and the difficulty level jumps around a little bit. If you make important decisions, especially in business, this book is for you.

Some great quotes:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

Measurement: a quantitatively expressed reduction of uncertainty based on one or more observations. So a measurement doesn’t have to eliminate uncertainty after all. A mere _reduction_ in uncertainty counts as a measurement and possibly can be worth much more than the cost of the measurement.

A problem well stated is a problem half solved.
—Charles Kettering (1876–1958)

The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as a tangible. First, we recognize that if X is something that we care about, then X, by definition, must be detectable in some way. How could we care about things like “quality,” “risk,” “security,” or “public image” if these things were totally undetectable, in any way, directly or indirectly? If we have reason to care about some unknown quantity, it is because we think it corresponds to desirable or undesirable results in some way. Second, if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. Once we accept that much, the final step is perhaps the easiest. If we can observe it in some amount, then it must be measurable.

Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

An important lesson comes from the origin of the word experiment. “Experiment” comes from the Latin ex-, meaning “of/from,” and periri, meaning “try/attempt.” It means, in other words, to get something by trying. The statistician David Moore, the 1998 president of the American Statistical Association, goes so far as to say: “If you don’t know what to measure, measure anyway. You’ll learn what to measure.”

Four useful measurement assumptions:
1. Your problem is not as unique as you think.
2. You have more data than you think.
3. You need less data than you think.
4. An adequate amount of new data is more accessible than you think.

Don’t assume that the only way to reduce your uncertainty is to use an impractically sophisticated method. Are you trying to get published in a peer-reviewed journal, or are you just trying to reduce your uncertainty about a real-life business decision?
Think of measurement as iterative. Start measuring it. You can always adjust the method based on initial findings.

In business cases, most of the variables have an "information value" at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement is easily justified. While there are certainly variables that do not justify measurement, a persistent misconception is that unless a measurement meets an arbitrary standard (e.g., adequate for publication in an academic journal or meets generally accepted accounting standards), it has no value. This is a slight oversimplification, but what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong. Whether it meets some other standard is irrelevant.

When people say “You can prove anything with statistics,” they probably don’t really mean “statistics,” they just mean broadly the use of numbers (especially, for some reason, percentages). And they really don’t mean “anything” or “prove.” What they really mean is that “numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.” With this I completely agree, but it is an entirely different claim.

The fact is that the preference for ignorance over even marginal reductions in ignorance is never the moral high ground. If decisions are made under a self-imposed state of higher uncertainty, policy makers (or even businesses like, say, airplane manufacturers) are betting on our lives with a higher chance of erroneous allocation of limited resources. In measurement, as in many other human endeavors, ignorance is not only wasteful but can be dangerous.

If we can’t identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value. The lack of an exact number is not the same as knowing nothing.
The McNamara Fallacy: The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide.

First, we know that the early part of any measurement usually is the high-value part. Don’t attempt a massive study to measure something if you have a lot of uncertainty about it now. Measure a little bit, remove some uncertainty, and evaluate what you have learned. Were you surprised? Is further measurement still necessary? Did what you learned in the beginning of the measurement give you some ideas about how to change the method? Iterative measurement gives you the most flexibility and the best bang for the buck.

This point might be disconcerting to some who would like more certainty in their world, but everything we know from “experience” is just a sample. We didn’t actually experience everything; we experienced some things and we extrapolated from there. That is all we get—fleeting glimpses of a mostly unobserved world from which we draw conclusions about all the stuff we didn’t see. Yet people seem to feel confident in the conclusions they draw from limited samples. The reason they feel this way is because experience tells them sampling often works. (Of course, that experience, too, is based on a sample.)

Anything you need to quantify can be measured in some way that is superior to not measuring it at all. —Gilb’s Law
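The Rule of Five quoted in the notes above is easy to check for yourself. Here is a minimal Python sketch (the function name and the use of a normal test population are my own illustrative choices, not from the book):

```python
import random
import statistics

def rule_of_five_hit_rate(trials=20_000, population_size=10_000):
    """Estimate how often a population's median falls between the
    smallest and largest values of a random sample of five."""
    population = [random.gauss(0, 1) for _ in range(population_size)]
    true_median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, 5)
        if min(sample) <= true_median <= max(sample):
            hits += 1
    return hits / trials

# Analytically: the median lies outside the sample range only when all
# five draws land on the same side of it, i.e. with probability
# 2 * (1/2)**5 = 1/16, so the hit rate is 1 - 1/16 = 0.9375.
print(rule_of_five_hit_rate())  # ≈ 0.9375
```

The simulation agrees with the simple coin-flip argument in the comment, which is why the rule holds for any population, however skewed.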

5 out of 5 – Jurgen Appelo – 297 references to risk, and only 29 references to opportunity. No mention of unknown unknowns (or black swans), and no mention of the observer effect (Goodhart's law). A great book, teaching you all about metrics, as long as you ignore complexity.

4 out of 5 – Takuro Ishikawa – The most important thing I learned from this book: “A measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.” Finally! Someone has clearly explained that measurements are all approximations. Very often in social research, I have to spend a lot of time explaining that metrics don’t need to be exact to be useful and reliable. Hopefully, this book will help me shorten those conversations.

4 out of 5 – Nils – An OK popularization of measurement techniques. But it downplays the key issue—which is data quality challenges, of which there are at least two types. The first is the "moneyball" type: a phenomenon where we know intuitively that there are important differences in measurable outcomes but we lack statistically significant explanations. The challenge here is to find things to measure that are consistently revealing of the phenomenon you are ultimately interested in measuring (say team wins). Making it harder is that sometimes you need to build a supercollider in order to measure the phenomenon in question, and for many reasons that may not always be feasible. Data collection is expensive, in many ways, not least socially: new forms of measurement of social activities (including business activities) threaten those who benefit from the status quo. The second data quality challenge is more insidious, the "deviant globalization" type: we have the data, or some data, but it is hopelessly and often intentionally corrupted or compromised, since there are actors who have an active interest in obscuring measurement. This is true about almost all information related to morally questionable activities, for example, from sex to drugs to theft. But it's not just there: any sales manager trying to accurately gauge the size of his reps' pipeline is intimate with the problem of trying to extract accurate data.
In sum, the book is fine on the technique side, but naive about what we may call the social epistemologies.

5 out of 5 – Martin Klubeck – I really like this book. Hubbard not only champions the belief that anything can be measured, he gives you the means (the understanding of how) to get it done. I have used his book on numerous occasions when tackling some difficult data collection efforts. Hubbard's taxonomy and mine don't fully jive, but that's a minor point; I found much more to like than not. I like to highlight and make notes in good books...this book is full of both. I especially like one of his "useful measurement assumptions." I think it sums up the book nicely: "There is a useful measurement that is much simpler than you think." This book helps you find the simple answer to the daunting problem of "how to measure" something. Another section I like a lot is how to "calibrate estimates" - basically it gives really useful, hands-on techniques for getting better at guessing. This is a great tool, not only for measuring, but for any role that requires good estimating. Nothing is perfect, and Hubbard has at least one chapter where I think he failed to simplify life - his chapter on measuring risk was too complicated (unless you are a statistician). Bottom line? Great book - especially for those tasked with collecting the data necessary to measure stuff!

4 out of 5 – Marcelo Bahia – An excellent read. It could be summed up as a "basic statistics for business" book, although it definitely goes beyond that in many aspects. As the title suggests, throughout the whole book the author strongly defends the case that everything can be measured, even though the method may not be obvious at first glance. The book structure basically consists of the explanations of why this is so and various examples and methods that should help the reader to deal with many types of such problems. Along the way, the writing is very clear and reading is more pleasant than you would expect from a "statistics book". This is so because much of the value-added of the book comes not from the quantitative side (which is actually quite basic statistics, something that I see as positive in the context of the book), but from the qualitative analysis and differentiated viewpoint of the author under various circumstances. Actually, he seems knowledgeable and is pretty insightful most of the time, and I expect that the usefulness of each of these insights will depend on your current career and experience. Having worked as a financial analyst in the Brazilian financial markets for the past 8 years, for me the 2 most interesting insights were: 1) His definition of measurement as any number or figure that reduces risk compared to your previous state.
I consider this REALLY important in the workplace, as most people consider valid measurements only those which can be precisely quantified, preferring ignorance over possible risk-reducing wide-range estimates in all other situations. 2) Due to the above misconception of the definition of measurement, people neglect measurements and estimates exactly in the situations in which they are most useful. When you don't know anything, any imprecise estimate will reduce risk and add value! Looking back, this non-obvious insight is precisely what we needed when facing some specific analytical and decision-making problems in my firm. Overall, this is one of the most interesting books I've read in the past few months, and it should be a great investment of time & money for any professional who deals even mildly with quantitative problems at work.

4 out of 5 – Bibhu Ashish – Happened to read the book from the IIBA.org site, where I have been a member since last year. The best takeaway from the book is the structured thought process it brings in while dealing with intangibles, which we are always demotivated to measure. To summarize my learning, I would just mention the below, which I have copied from the book:
1. If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow.
2. If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
3. You can quantify your current uncertainty with calibrated estimates.
4. You can compute the value of additional information by knowing the "threshold" of the measurement where it begins to make a difference compared to your existing uncertainty.
5. Once you know what it's worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
6. Knowing just a few methods for random sampling, controlled experiments, or even merely improving on the judgments of experts can lead to a significant reduction in uncertainty.
One caution though: people who are not that fond of mathematics and data may find it a bit too much, but this book is worth reading at least once.
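The "value of additional information" mentioned in the list above can be sketched numerically. Hubbard bounds what a measurement is worth by the expected opportunity loss: the chance of being wrong times the cost of being wrong. The function name and dollar figures below are illustrative, not from the book:

```python
def expected_opportunity_loss(p_wrong, cost_if_wrong):
    """Upper bound on the value of measuring a decision variable:
    the probability of making the wrong choice times the cost you
    incur if you do. Perfect information would remove this loss."""
    return p_wrong * cost_if_wrong

# Hypothetical: a 50% chance a project fails, losing $500,000 if it does.
# No measurement of that risk is worth more than the loss it could remove.
print(expected_opportunity_loss(0.5, 500_000))  # 250000.0
```

This is why measurement effort should concentrate on variables with both high uncertainty and a high cost of being wrong; either factor near zero makes the information value near zero.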

4 out of 5 – Alok Kejriwal – How to Measure Anything - Book Review. A mentally challenging yet incredibly enlightening book. What’s impressive about the content?
- The art and science of making guesses.
- The ability to use well-thought-through assumptions and estimate outcomes.
- Early examples in the book of legends such as Fermi, who asked his students to estimate the number of piano tuners in Chicago (more like the questions you supposedly get asked in a Google interview?).
- Bayes' Theorem and Bayesian thinking. It's NERDY but essential.
- Profound, amazing examples of how you DON'T have to have too much data to analyse things.
- How to INVENT metrics. How the Cleveland Orchestra started counting 'standing ovations' to measure the success of its new conductor.
- The importance of the Confidence Interval (CI).
- MONTE CARLO simulations!
- How Amazon introduced free wrapping to figure out how many books were gifts!
- Q's like: How would you measure the number of fish in a lake?
This is a MATH-heavy book that takes a LONG time to read. If you don't like numbers & formulas (the book is FULL of them), I suggest you still buy the book and understand what you want.
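The fish-in-a-lake question has a classic answer: mark and recapture. Tag a batch of fish, release them, then see what fraction of a second catch carries tags. A rough Python sketch (the Chapman-corrected estimator and the sample numbers are my additions for illustration, not from the book):

```python
def mark_recapture_estimate(marked, caught, recaptured):
    """Estimate a population size N by mark and recapture.

    Tag `marked` fish and release them. Later catch `caught` fish and
    count `recaptured` tagged ones. If the second catch is representative,
    recaptured/caught ~ marked/N, so naively N ~ marked*caught/recaptured.
    The Chapman correction below reduces bias and avoids division by zero
    when no tagged fish are recaptured.
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical: tag 100 fish; a later catch of 80 contains 10 tagged ones.
# Naive estimate: 100 * 80 / 10 = 800; Chapman gives a slightly lower value.
print(mark_recapture_estimate(100, 80, 10))  # ≈ 742.7
```

The same proportional-sampling trick reappears throughout the book: a small, cheap observation constrains a quantity that seemed unobservable.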

4 out of 5 – Steve Walker – There is a lot of good information here, but it is more of a textbook and very dry. I read this book because I have to make decisions every day. Some decisions are very easy because I have the intel and facts that make the decision for me. But other decisions aren't so easy. What are my "real" risks? How do I separate emotion from a decision? What about all the things involved that can't be measured? Ah, that is where this book was insightful and helpful. Hubbard asserts that there isn't anything that can't be measured. Metrics. That is the key to making better decisions. The group I manage has a lot of dynamic and organic tasks to perform each day. I have never been able to quantify a lot of the work we do. That is because I am entrenched in scientific measurements such as average time to handle a customer call. That measurement is meaningless for me. Each call is a different subject. I cannot measure their performance based on how quickly they resolve a call because some problems are simple and others are complex and require enlisting other personnel. But Hubbard teaches many techniques and alternate ways to look at things to get some way of quantifying; perhaps not precisely, but enough to help navigate the myriad pieces of information that can go into a business decision. You have to "want" to read this book.
But if you "want" to improve ROI; if you "want" to provide better risk analysis; if you "want" to be more confident about providing management with your recommendations ... then you'll "want" to read this book.

4 out of 5 – Robert Martin – While analysing digital entertainment stocks over summer, I got stuck on the question of how to value a company's intellectual property. This initially seems like an incredibly difficult task – how can one quantitatively estimate the value of something so intangible? I was able to make progress by considering the following: if you own IP, the next time you want to produce a movie, you get to keep all of the profits rather than having to pay a portion to a licensor; this difference can be thought of as the cash flow profile of the IP. How to Measure Anything is a definitive resource on questions of this nature. The key thesis of HTMA, as suggested in its title, is that any quantity of practical interest can be measured, where "measurement" means a reduction in uncertainty. Hubbard provides a general framework for approaching measurement tasks, including specific techniques and worked examples. HTMA is a difficult book to review; it has too many case studies and anecdotes for it to be a textbook but goes into much more detail than a typical nonfiction book. For example, rather than simply stating that Bayes' theorem is the appropriate framework for thinking about many measurement problems, Hubbard actually provides step-by-step walkthroughs of the calculations and discusses how to implement them in Excel.
I think HTMA is an excellent book for the right audience – I would broadly characterise this audience as practitioners/students of "management science", e.g. managers who are facing difficult business questions that look unquantifiable, or mathematicians/physicists who want to learn about business. For a general audience interested in rationality and decision making, I would suggest starting with Superforecasting (many of the concepts are similar). HTMA still has plenty of value, but it is easy to get bogged down by the walkthroughs of the actual calculations (I would have preferred them relegated to an appendix, but I'm not the target audience). The short case studies at the end might be an interesting place to start: if you are as impressed with them as I was, you can dig into the rest of the book for the nuts and bolts. One of them involves the valuation of industry standards; it's cool to see how something so intangible can be quantitatively measured. Some key points:
- Everything of practical importance is measurable, else it could not be practical by definition.
- Uncertainty-reduction has diminishing returns. When you are very uncertain about something, even a tiny amount of data can massively reduce uncertainty, but to get high levels of precision, you need a lot of data.
- Correlation does not imply causation, but it does provide evidence for causation (via Bayes' theorem).
- It is worth thinking about the meta-question: determining the value of knowing an answer to the question, rather than just valuing the answer to the question. This can tell you how much time/money you should allocate to finding an answer.
- Applied Information Economics is a rational framework for approaching business cases: focus on the areas that are most uncertain and consider the value of measurements before designing a quantitative strategy for uncertainty-reduction.

4 out of 5 – Nathan – This is a dense book. It took me several months to get through it, but that was partially because after the refresher on Bayesian statistics I started reading another textbook on that. If you like math and numbers and analysis and have to make decisions, you'll get some useful information from this book. I built my first Monte Carlo model while walking through this. For years I've been asking friends "How confident are you?" when they give me a binary answer. E.g.:
Q: Will this be done by Friday?
A: Yes
Q: How confident are you?
A: 50%
After reading this I've taken away the idea of always asking people for a 90% confidence interval. I think one of the most useful (and fun) parts of this book is the calibration exercise. If you're asked 10 questions and told to provide a 90% confidence interval of where the true answer is, then you should get 9 out of the 10 correct. I didn't on my first try, and most people are terrible at it. But apply money to the mix, and people instantly improve. This tip was immediately used in the next model I built :)

Here are my notes:

Although this may seem a paradox, all exact science is based on the idea of approximation. If a man tells you he knows a thing exactly, then you can be safe in inferring that you are speaking to an inexact man. —Bertrand Russell (1872–1970), British mathematician and philosopher

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.
A mere reduction, not necessarily elimination, of uncertainty will suffice for a measurement. Not only does a true measurement not need to be infinitely precise to be considered a measurement, but the lack of reported error—implying the number is exact—can be an indication that empirical methods, such as sampling and experiments, were not used (i.e., it’s not really a measurement at all). the key lesson is that measurements are more than you knew before about something that matters. A problem well stated is a problem half solved. —Charles Kettering (1876–1958), American inventor, holder of 300 patents, including electrical ignition for automobiles There is no greater impediment to the advancement of knowledge than the ambiguity of words. —Thomas Reid (1710–1769), Scottish philosopher If someone asks how to measure “strategic alignment” or “flexibility” or “customer satisfaction,” I simply ask: “What do you mean, exactly?” It is interesting how often people further refine their use of the term in a way that almost answers the measurement question by itself. Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. The only valid reason to say that a measurement shouldn’t be made is that the cost of the measurement exceeds its benefits. Usually, Only a Few Things Matter—But They Usually Matter a Lot In most business cases, most of the variables have an “information value” at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement effort is easily justified. what makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong. Ignorance is never better than knowledge. —Enrico Fermi, winner of the 1938 Nobel Prize for Physics Four Useful Measurement Assumptions: It’s been measured before. You have far more data than you think. 
You need far less data than you think. Useful, new observations are more accessible than you think.

The first few observations are usually the highest payback in uncertainty reduction for a given amount of effort. In fact, it is a common misconception that the higher your uncertainty, the more data you need to significantly reduce it. Again, when you know next to nothing, you don't need much additional data to tell you something you didn't know before.

A decision has two or more realistic alternatives.

Merely decomposing highly uncertain estimates provides a huge improvement to estimates. As the great statistician George Box put it, "Essentially, all models are wrong, but some are useful."

The subjective estimates of some persons are demonstrably—measurably—better than those of others. The ability of a person to assess odds can be calibrated—just like any scientific instrument is calibrated to ensure it gives proper readings. Assessing uncertainty is a general skill that can be taught with a measurable improvement. We are simply not wired to doubt our own proclamations once we make them.

I also ask experts who are providing range estimates to look at each bound on the range as a separate "binary" question. A 90% CI means there is a 5% chance the true value could be greater than the upper bound and a 5% chance it could be less than the lower bound. This means that estimators must be 95% sure that the true value is less than the upper bound. If they are not that certain, they should increase the upper bound until they are 95% certain. I sometimes call this the "absurdity test." It reframes the question from "What do I think this value could be?" to "What values do I know to be ridiculous?" We look for answers that are obviously absurd and then eliminate them until we get to answers that are still unlikely but not entirely implausible. This is the edge of our knowledge about that quantity.
Assumptions about quantities are necessary if you have to use deterministic accounting methods with exact points as values. You could never know an exact point with certainty, so any such value must be an assumption. But if you are allowed to model your uncertainty with ranges and probabilities, you do not have to state something you don't know for a fact. If you are uncertain, your ranges and assigned probabilities should reflect that. If you have "no idea" that a narrow range is correct, you simply widen it until it reflects what you do know—with 90% confidence.

When it comes to assessing your own uncertainty, you are the world's leading expert. Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty.

It is better to be approximately right than to be precisely wrong. —Warren Buffett

It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible. —Aristotle

For most problems in statistics and measurement, we are asking, "What is the chance the truth is X, given what I've seen?" Again, it's actually often easier to answer the question, "If the truth were X, what was the chance of seeing what I did?" Bayesian inversion allows us to answer the first question by answering the second, easier question.

When we examine our own behaviors closely, it's easy to see that only a hypocrite says "Life is priceless." Any fair researcher should always be able to say that sufficient empirical evidence would change their mind.

If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow. If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
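The Rule of Five quoted in the notes above sounds implausible until you work the arithmetic: each independent random sample has a 50% chance of landing below the population median, so the sample range misses the median only when all five draws land low or all five land high, i.e. 2 × (1/2)^5 = 6.25% of the time. A minimal Python simulation, using an invented population of hypothetical commute times, checks this:

```python
import random

# Hypothetical population of 10,000 commute times (minutes). The rule is
# distribution-free, so the lognormal choice here is arbitrary.
random.seed(42)
population = sorted(random.lognormvariate(3.0, 0.5) for _ in range(10_000))
true_median = population[len(population) // 2]

# Draw many random samples of five and count how often the true median
# falls between the sample's smallest and largest values.
trials = 20_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"hit rate: {hits / trials:.4f}")  # close to 0.9375
```

The seed, population size, and distribution are all illustrative choices; any population gives the same ~93.75% hit rate, which is the whole point of the rule.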

4 out of 5 – Jon: Simply put, the first half of this is just awesome. As I listened to this via audio, the second half is plagued by many formulas that don't translate well or aren't easily understood when listened to. The second half is also very heavy on statistics, which could be a somewhat laborious read for some. The first half is highly recommended, as it goes into what it means to "measure" something and suggests some very fundamental questions regarding measuring. E.g.: What is it you want to have measured? E.g., what does security mean for you? Why is this important for you? How much is this measurement worth to you? What do you know about the problem now? Hubbard gives tools for solving problems, e.g. the Fermi and the Bayesian toolboxes, that allow a rough estimation of practically anything. Hubbard also gives some very good pointers as to how you calibrate yourself to counteract psychological biases. If you read it, make sure you dedicate a good amount of time to the first half as, imo, this is where most of the loot is located.

4 out of 5 – Karen: This book is a *must read* for anyone who needs to compile coherent, useful business cases in scenarios where "intangibles" exist. As the author notes, it's not about compiling exact data but rather reducing uncertainty (and thus risk). I found the concepts and approaches outlined by this book to be very useful. While not a Whispersync title, I bought both the Kindle version and the audible.com version -- while it's nice to have the concepts available for visual review, I found the audible.com version to be more compelling and interesting. Highly recommended.

5 out of 5 – Rick Howard: Douglas Hubbard's "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions about risk decisions for our organizations. He even demonstrates how easy it is for network defenders to run our own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles'" is indeed a Cybersecurity Canon Hall of Fame candidate, and you should have read it by now.
Introduction

The Cybersecurity Canon project is a "curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional's education that will make the practitioner incomplete." [1] This year, the Canon review committee inducted this book into the Canon Hall of Fame: "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen. [2][3] According to the Canon committee member reviewer, Steve Winterfeld, "How to Measure Anything in Cybersecurity Risk" is an extension of Hubbard's successful first book, "How to Measure Anything: Finding the Value of 'Intangibles' in Business." It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in qualitative analytics with practical application guidance. [4] I personally believe that precision risk assessment is a key and currently missing element in the CISO's bag of tricks. As a community, network defenders in general are not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them red, yellow, or green labels to mean bad, kind-of-bad, or not bad.
If any of my bosses had bothered to ask me why I gave one weakness a red label vs a green label, I would have said something like: "25 years of experience, Blah, Blah, Blah, Trust Me, Blah, Blah, Blah, can I have the money please?" I believe the network defender's inability to translate technical risk into business risk with any precision is the reason that the CISO is not considered at the same level as other senior C-Suite executives like the CEO, the CFO, the CTO, and the CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand. This CISO inability is the reason that the Canon Committee inducted "How to Measure Anything in Cybersecurity Risk," and another precision risk book called "Measuring and Managing Information Risk: A FAIR Approach," into the Canon Hall of Fame. [5][4][3][6][7] These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business. For me, though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stat courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I only understood about three-quarters of what I was reading, not because they were written badly but because I struggled with the material. I decided to get back to the basics and read Hubbard's original book that Winterfeld referenced in his review, "How to Measure Anything: Finding the Value of 'Intangibles' in Business," to see if it was also Canon worthy.
The Network Defender's Misunderstanding of Metrics, Risk Reduction, and Probabilities

Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. He reasons from scholars like Edward Lee Thorndike and Paul Meehl of the early twentieth century about Clarification Chains: If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured. [8] As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with any precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can't know something with 100% accuracy, or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs. Hubbard makes the point that we are not looking for 100% accuracy. What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something.
He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences like art or free time or life in general. According to Hubbard, "We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities." [8] These probabilities refer to our uncertainty state about a specific question. The math trick that we all need to understand is allowing for a range of possibilities that we are 90% sure the true value lies between. For example, we may be trying to reduce the number of humans that have to respond to a cyberattack. In this fictitious example, last year the Incident Response Team handled 100 incidents with three people each; a total of 300 people. We think that installing a next generation firewall will reduce that number. We don't know exactly how many, but some. We start here to bracket the question. Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people, for a total of nine? Maybe. What about reducing the number to 10 incidents with three people, for a total of 30? That might be possible. That is our lower limit. Let's go to the high side. Do we think that installing the firewall will have zero impact in reducing the number? No. What about 90 attacks with three people, for a total of 270? Maybe. What about 85 attacks with three people, for a total of 255? That seems reasonable. That is our upper limit. By doing this bracketing we can say that we are 90% sure that installing the next generation firewall will reduce the number of humans that have to respond to cyber incidents from 300 to between 30 and 255.
Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that, first, you now know this where before you didn't know anything. Second, this is the start. You can now collect other metrics that might help you narrow the range.

The History of Scientific Measurement Evolution

This particular view of probabilities, the idea that there is a range of outcomes that you can be 90% sure about, is the Bayesian interpretation of probabilities. Interestingly, this view of statistics has been out of favor almost since its inception, when Thomas Bayes penned the original formula back in the 1740s. The naysayers were the frequentists. Their theory said that the probability of an event can only be determined by how many times it has happened in the past. To them, modern science requires both objectivity and precise answers. According to Hubbard, "The term 'statistics' was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749. He derived the word from the Latin statisticum, meaning 'pertaining to the state.' Statistics was literally the quantitative study of the state." [8] In the frequentist view, the Bayesian philosophy requires a measure of "belief and approximations. It is subjectivity run amok, ignorance coined into science." [7] But the real world has problems where the data is scant. Leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like defeating the Enigma encryption machine in World War II and finding a lost, sunken nuclear submarine, a search that was the basis for the movie "The Hunt for Red October." But it wasn't until the early 1990s that the theory became commonly accepted. [7] Hubbard walks the reader through this historical research about the current state of scientific measurement.
He explains how Paul Meehl, beginning in the 1950s, demonstrated time and again that statistical models outperformed human experts. He describes the birth of information theory with Claude Shannon in the late 1940s and credits Stanley Smith Stevens, around the same time, with crystallizing the different scales of measurement: nominal, ordinal, interval, and ratio. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements around subjective probabilities. In the end, Hubbard defines measurement as follows:

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]

Simple Math Tricks

Hubbard explains two math tricks that, after reading, seem impossible to be true but, when used by Bayesian proponents, greatly simplify measurement-taking for difficult problems.

The Power of Small Samples, i.e., the Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]

The Single Sample Majority Rule (i.e., the Urn of Mystery Rule): Given maximum uncertainty about a population proportion—such that you believe the proportion could be anything between 0% and 100% with all values being equally likely—there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]

I admit that the math behind these rules escapes me. But I don't have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies, "Lincoln." President Lincoln, played brilliantly by Daniel Day-Lewis, discusses his reasoning for keeping the southern agents, who want to discuss peace before the 13th Amendment is passed, away from Washington: "Euclid's first common notion is this. Things that are equal to the same thing are equal to each other. That's a rule of mathematical reasoning. It's true because it works.
Has done and always will do." [9]

The bottom line is that "statistically significant" does not mean a large number of samples. Hubbard says that statistical significance has a precise mathematical meaning that most laypeople do not understand and many scientists get wrong most of the time. For the purposes of risk reduction, stick to the idea of a 90% confidence interval regarding potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.

Simple Measurement Best Practices and Definitions

As I said before, most network defenders think that measuring risk in terms of cybersecurity is too hard. Hubbard explains four rules of thumb that every practitioner should consider before they give up: It's been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think. [8]

He then defines "uncertainty" and "risk" through the lens of possibilities and probabilities:

Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility.
Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities.
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of Risk: A set of possibilities, each with quantified probabilities and quantified losses. [8]

In the network defender world, we tend to define risk in terms of threats and vulnerabilities and consequences. [10] Hubbard's relatively new take gives us a much more precise way to think about these terms.

Monte Carlo Simulations

According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations.
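As a tiny first illustration of a Monte Carlo check, the Single Sample Majority Rule quoted above can be verified numerically. The urn of marbles below is invented for illustration: the hidden proportion of green marbles is drawn uniformly (maximum uncertainty), one marble is sampled, and we count how often that single draw matches the majority color.

```python
import random

random.seed(7)
trials = 100_000
majority_hits = 0
for _ in range(trials):
    p_green = random.random()               # true, hidden proportion of green
    drew_green = random.random() < p_green  # a single random marble
    majority_green = p_green > 0.5          # which color is the majority
    if drew_green == majority_green:
        majority_hits += 1

print(f"single draw matched the majority: {majority_hits / trials:.3f}")  # ~0.75
```

The 75% figure falls out of averaging max(p, 1 - p) over all equally likely proportions p, which integrates to 3/4.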
In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everybody who has access to a spreadsheet can run their own Monte Carlo simulations. For example, take my previous example of trying to reduce the number of humans that have to respond to a cyberattack. We said that during the previous year, 300 people responded to cyberattacks. We said that we were 90% certain that the installation of a next generation firewall would reduce the number of humans that have to respond to incidents to between 30 and 255. We can refine that number even more by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios where I randomly picked a number between 0 and 300. I calculated the mean to be 131 and the standard deviation to be 64. Remember that the standard deviation is nothing more than a measure of spread from the mean. [11][12][13] The 68-95-99.7 rule says that 68% of the recorded values will fall within one standard deviation of the mean, 95% will fall within two standard deviations, and 99.7% will fall within three standard deviations. [8] With our original estimate, we said there was a 90% chance that the number is between 30 and 255. After running the Monte Carlo simulation, we can say that there is a 68% chance that the number is between 67 and 195. How about that? Even a statistical Luddite can run his own Monte Carlo simulation.
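The reviewer's spreadsheet experiment can be sketched in a few lines of Python. This follows his description (100 scenarios, each a uniform random draw between 0 and 300); the exact mean and standard deviation will vary from run to run, so they will differ slightly from his reported 131 and 64:

```python
import random
import statistics

# 100 simulated scenarios: the number of responders needed next year,
# drawn uniformly from 0 to 300 (the reviewer's simple modeling choice).
random.seed(1)
scenarios = [random.uniform(0, 300) for _ in range(100)]

mean = statistics.mean(scenarios)
sd = statistics.pstdev(scenarios)  # standard deviation: spread from the mean

# By the 68-95-99.7 rule, roughly 68% of normally distributed values fall
# within one standard deviation of the mean.
print(f"mean = {mean:.0f}, standard deviation = {sd:.0f}")
print(f"68% interval: {mean - sd:.0f} to {mean + sd:.0f}")
```

One caveat worth noting: drawing uniformly from 0 to 300 treats every outcome in that span as equally likely, and the 68-95-99.7 rule strictly applies to normal distributions, so this is a rough sketch of the technique rather than a careful model of the 90% interval (30 to 255) elicited earlier.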
Conclusion

After reading Hubbard's second book in the series, "How to Measure Anything in Cybersecurity Risk," I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked, and to determine if the original was Canon worthy too. I learned that there was probably a way to collect data to support risk decisions for even the hardest kinds of questions. I learned that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles'" is indeed Canon worthy and you should have read it by now.

Sources

[1] "Cybersecurity Canon: Essential Reading for the Security Professional," by Palo Alto Networks, Last Visited 5 July 2017, https://www.paloaltonetworks.com/thre...
[2] "Cybersecurity Canon: 2017 Award Winners," by Palo Alto Networks, Last Visited 5 July 2017, https://cybercanon.paloaltonetworks.c...
[3] "'How To Measure Anything in Cybersecurity Risk' - Cybersecurity Canon 2017," Video Interview by Palo Alto Networks, Interviewer: Canon Committee Member Bob Clark, Interviewees: Douglas W. Hubbard and Richard Seiersen, 7 June 2017, Last Visited 5 July 2017, https://www.youtube.com/watch?v=2o_mA...
[4] "The Cybersecurity Canon: How to Measure Anything in Cybersecurity Risk," Book review by Canon Committee Member, Steve Winterfeld, 2 December 2016, Last Visited 5 July 2017, https://cybercanon.paloaltonetworks.com/ [5] "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, Published by Wiley, April 25th 2016, Last Visited 5 July 2017, https://www.goodreads.com/book/show/2... [6] "The Cybersecurity Canon: Measuring and Managing Information Risk: A FAIR Approach," Book review by Canon Committee Member, Ben Rothke, 10 September 2015, Last Visited 5 July 2017, https://researchcenter.paloaltonetwor... [7] "Sharon Bertsch McGrayne: 'The Theory That Would Not Die' | Talks at Google," by Sharon Bertsch McGrayne, Google, 23 August 2011, Last Visited 7 July 2017, https://www.youtube.com/watch?v=8oD6e... [8] "How to Measure Anything: Finding the Value of "Intangibles" in Business," by Douglas W. Hubbard, Published by John Wiley & Sons, 1985, Last Visited 10 July 2017, https://www.goodreads.com/book/show/4... [9] "Lincoln talks about Euclid," by Alexandre Borovik, The De Morgan Forum, 20 December 2012, Last Visited 10 July 2017, http://education.lms.ac.uk/2012/12/li... [10] BITSIGHT SECURITY RATINGS BLOG," by MELISSA STEVENS, 10 JANUARY 2017, Last Visited 10 July 2017, https://www.bitsighttech.com/blog/cyb... [11] "Standard Deviation - Explained and Visualized," by Jeremy Jones, YouTube, 5 April 2015, Last Visited 9 July 2017, https://www.youtube.c

4 out of 5 – Paulo Saraiva: To put it simply: the best book I ever read about risk management. If you want great and practical insights about what you need to measure when it comes to problem solving or decision making, this is a masterpiece. Here you will find a lot of mathematical tools that are extremely useful in clarifying situations in which we tend to think there is no way to perform objective measurement, specifically about what we usually call "intangibles". Even when it comes to the psychology of decision making, Hubbard proposes pragmatic ways to translate some of the most acknowledged theories and models into mathematical language. The chapters covering the Lens and Rasch models are particularly remarkable. A modern and combative stance against the subjectivity that permeates most risk management tools widely used in organizations.

4 out of 5 – Kevin: Tedious to read, unless you want a statistics course. I was looking for the theory, not the equations. I don't think the entirety of the book was worth the few nuggets I pulled out. The cliff notes amount to: measurement is about uncertainty reduction, not necessarily uncertainty elimination. Don't forgo trying to measure something just because you know it won't be a perfect measurement. Is it a better measurement than what you're currently using? Will it be valuable in making a decision? How much is on the line in that decision? There was another chestnut he had about the animosity towards statistics: When people say that you can prove anything with statistics, they probably don't really mean statistics. They just mean broadly the use of numbers, especially, for some reason, percentages. And they really don't mean "anything" or "prove". What they really mean is that numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.

4 out of 5 – Jason: A dense, hard-to-read book, but so worth it. It's been a while since I read (and finished) a book so dense and complicated. It was worth it though, as it changed so much of how I think about everything, from work and estimation with prioritization to all the data that is around us every day. So. Very good.

5 out of 5 – Emil O. W. Kirkegaard: Kind of an introduction to applied decision theory, with some good stuff about how to quantify things.

5 out of 5 – Daniel Hageman: Fantastic book for anyone worried that our lack of certainty in measurement techniques implies a categorical inability to measure in principle.

5 out of 5 – Allison: Lots of great commentary on why using data is important... his processes for measurement are less... interesting? A good read for data people. :)

4 out of 5 – Vlad Ardelean: Oh boy, I've been waiting a long time to review this one. I'll start with the good parts, as they're few and far between. I've also posted this review directly from the Kindle app twice already, and it doesn't show up, so this is my 3rd attempt to post a review for this book. The good parts: I learnt how to measure the population of fish in a lake. That's quite cool! I will not give a spoiler here; enough to say that it involves catching and tagging the fish. Then I learnt a few statistics factlets. For instance, in a normal distribution, 90% of the measurements will fit in an interval of ±1.645 standard deviations around the mean (3.29 sigmas wide). I also learned how I can get ~93.75% confidence that, if I ask 5 random people how long it takes them to get to work, the population median will be between the maximum and minimum of those 5 values... regardless of the size of the population. These are just statistical truths, no debate there. I also learnt about Emily Rosa, who debunked the claims of "touch healing" therapists regarding them being able to detect auras... spoiler: they couldn't do it, or at least couldn't show they're better than tossing a coin. I learned about how Enrico Fermi was really good at estimation problems using just his available knowledge. I learnt about Eratosthenes, who estimated the radius of the Earth with quite high accuracy! It was fun.
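For readers who don't mind the spoiler, the fish-in-a-lake measurement the reviewer alludes to is the classic mark-and-recapture estimate (often called the Lincoln-Petersen method): tag one catch, release it, catch again later, and use the fraction of tagged fish in the second catch to estimate the whole population. The catch counts below are invented for illustration:

```python
# Mark-and-recapture (Lincoln-Petersen) sketch with made-up numbers.
tagged_first_catch = 100  # fish caught, tagged, and released on day one
second_catch = 80         # fish caught on a later day
tagged_in_second = 16     # how many of the second catch carry a tag

# If the tagged fish mixed evenly back into the lake, then
# tagged_first_catch / population ≈ tagged_in_second / second_catch,
# so solving for the population gives:
estimated_population = tagged_first_catch * second_catch / tagged_in_second
print(estimated_population)  # 500.0
```

The key assumption, which the book discusses, is that the tagged fish disperse evenly before the second catch; the sample then stands in for the whole lake.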
Other nice things in the book were mentions of the Rasch and lens decision models, and Monte Carlo simulations for assisting in decisions. Then Daniel Kahneman (and some other people) are mentioned for contributions to psychology whereby they show consistent flaws in human thinking (we're very bad at estimating extremely rare events). There's some talk about Bayesian statistics compared to the "frequentist" interpretation. Another thing that surprised me was that the author talks at length about these magical people called "calibrated estimation experts". Apparently (and there's literature with more evidence for this), you can train yourself to give answers AND the probability of the answer being right. For instance, I don't know when Napoleon was born, but I can say with 90% certainty that it was between 1750 and 1850. Apparently, you can train yourself to become very good at providing that probability. The author then provides a few tricks on how to better give a probability for "guessing" answers. This sums up the good parts of the book. I have not provided more details here, but rest assured you won't find many more details than this in the book. The bad parts: The author bashes and mocks people so much, it's unreal. He especially has a deep hate towards managers. Here's some "statistical" evidence: I counted the number of times the author wrote the word "managers" in the book. It's 79. Here are a few quotes, and they go on and on and on... and on: "I heard managers say that since each product is unique, they cannot extrapolate..." "I have known managers who simply presume the superiority of their intuition..."
"...it simply won't occur to many managers that an "intangible" can be measured" "...her examples prove what can be done by most managers if they tried" "...Other managers might object: "there is no way to measure that thing without spending millions"" "Once managers figure out what they mean and why it matters, the issue in question starts to look a lot more measurable" "Business managers need to realize that some things seem intangible only because they just haven't defined what they are talking about" "The problem is that when managers make choices about whether to bother to do a random sample in the first place, they are making the judgments intuitively..." "But it has some significant advantages over much of the current measurement-stalemate thinking of some managers" Maybe not all mentions of the word "managers" have a directly bad connotation, but I'm quite sure none of those mentions put managers in a good light. There's more! The author uses another formula to mock people, and that is "those who...". I searched for usages of that formula -> 45. I won't quote, but I hope you get the idea. More bad things. Remember when I wrote about Emily Rosa and her debunking of supernatural powers? The author has an interesting fascination with coming back to her example. He does this 97 times, in fact! With 410 pages, that gives a mention of Emily every 4.23 pages. Enrico Fermi and Eratosthenes get less attention, with only 52 and 37 mentions throughout the book. Still, I think it's fair to say that repetition is an issue with this book. To top it off, the author has the arrogance to claim that with a book such as this one, Eratosthenes, Emily Rosa and Enrico Fermi would probably have been able to do a lot more. More bad things: The author claims that there are plenty of statistics books, and this is not one of them. He advertises his book as providing general ideas applicable everywhere.
Among those ideas are things like "measurements help in making a decision", "there's always more information than you think you have", "you always need less information than you think you do", "measure the things that are most important" and "take into account whether the price of the measurement is lower than the cost of the decision". Am I alone in thinking that these ideas are so trivial that a book about them is not really valuable? Also, since he's talking about decisions, he never mentions the time aspect of a measurement, just the price. You'd think he might consider that, but nope! I don't understand who the target audience for this book is. Is it the "managers" the author continuously mocks? Not likely. Is it people who want to learn how to measure? Probably not either, because this book doesn't really teach any measurement techniques; it just mentions three decision-making models which he barely explains. Even more bad: The author introduces the terms that I talk about in the "good" section. That's all he does, he "introduces" them. I did learn statistics while reading this book, but it's because I spent a lot of time on Wikipedia. The author doesn't try to rigorously explain these concepts. At most, you get from him recipes like this: 1. Note down the numbers you get from doing X 2. Take the average of those numbers 3. Subtract the average from each number 4. Multiply the difference by 1.645 5. etc. etc. (This is not an example from the book. This is just my impersonation of the author's examples. They are hard to follow on Kindle. There are not enough explanations, and then you're just left with a recipe.) Next to "not explaining complex concepts", the author also over-explains simple ones, again in a very repetitive fashion. There are a lot of unnecessary explanations regarding very simple graphs. There's one graph illustrating the price of measurement versus the value of information.
The price of measurement rises slowly at first, and increases fast as the amount of information approaches perfect information. The value of information is the opposite: it rises very steeply at first, but then only very slowly towards the maximum amount of information. I'm not sure how much time the author spends on this, but I did have the feeling that it's ridiculous, so I'm reporting on the incident. It's not the only incident like this. The ugly: This part is my personal interpretation of the author's intent, based on the book's content. The author seems to emphasize quite a lot that he has a company that offers calibration training to people. Therefore it seems to me that, at least in part, the motivation for this book was to self-advertise. This would be fair if stated up front. It was not stated up front, though. The author might also have been using the "statistical" fact that one can charge more for longer books. Clever, but I'm asking for my money back on this one. DO NOT READ THIS BOOK! IT'S TOO LONG AND REPETITIVE TO BE A GOOD INTRODUCTORY BOOK, AND CONTAINS FAR TOO LITTLE INFORMATION FOR IT TO BE ANYTHING ELSE.
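The fish-counting trick mentioned at the start of this review is classically done with mark-and-recapture (the Lincoln-Petersen estimator): the tagged fraction in a second catch should mirror the tagged fraction in the whole lake. A minimal simulation sketch, with all numbers invented for illustration:

```python
import random

def lincoln_petersen(n_tagged, n_caught, n_tagged_in_catch):
    """Population estimate: total tagged divided by the
    tagged fraction observed in the second catch."""
    return n_tagged * n_caught / n_tagged_in_catch

# Simulate a lake of 1000 fish: tag 100, release, then catch 80 at random.
rng = random.Random(7)
lake = list(range(1000))
tagged = set(rng.sample(lake, 100))
second_catch = rng.sample(lake, 80)
hits = sum(1 for fish in second_catch if fish in tagged)
estimate = lincoln_petersen(100, 80, max(hits, 1))  # guard against zero tags
print(round(estimate))
```

With ~10% of the lake tagged, a catch of 80 should contain roughly 8 tags, putting the estimate in the neighborhood of the true 1000 (with substantial sampling noise for catches this small).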

5 out of 5 – Stephen Rynkiewicz – Classical Greeks not only figured out that the planet is round, but had it measured. Eratosthenes calculated its circumference from a lunch-hour measurement at his library in Alexandria during the summer solstice, knowing only his distance from the Tropic of Cancer. Eratosthenes is a hero of Chicago statistician Doug Hubbard, who trains managers in "calibrated estimates," basically closely observed ballpark figures. Here he describes approaches to making more accurate guesses, including when it's worth spending money to take out some of the guesswork. If you didn't get past introductory statistics in college, this is a useful guide to Monte Carlo simulations, Bayesian inversion, crowdsourcing and other analytical concepts. Not only does Hubbard open up the black box of predictive modeling, but he also points to ways we can think about thinking: It's risky to rely on just gut instinct, but maybe we can trust our gut once we measure just how far to trust it.

5 out of 5 – Chris – I listened to the audiobook and it was interesting. I enjoyed the opening and the Enrico Fermi examples: the scraps of paper at Alamogordo, where the first nuclear bomb was tested, used to estimate the yield of the bomb, and the piano tuners in the city of Chicago. The author describes these as Fermi questions: you use things you already know to arrive at a measurement. The author goes through confidence intervals (CI) and how to determine them. He also details some simple statistics as well as some more advanced statistics. He also poses the interesting question of whether a sales or business decision was ever actually made based on some report someone had. The short message of the book is that with some questions and thought you can measure anything. I definitely recommend this book.

5 out of 5 – Jeff Yoak – This was a fantastic read. It helps with general numeracy as well as providing an overview on how to think about measurement and statistics practically. This is an area where I have some experience, and I still learned a lot. This book, especially the first half, should be accessible to everyone. The second half is a bit more technical, and I wished I had been reading it on paper instead of in audio. I may do that eventually. The pacing is a little hard in audio and I could have benefited from notes, but still... a great read and actively beneficial.

5 out of 5 – Christophe Addinquy – This is a high-density read, by all means! Inside this text, you'll discover how to "calibrate" your estimation and, more importantly, how to decompose a complex problem into more manageable chunks. The statistical tools, however, require more substantial statistical knowledge (hello, Monte Carlo!). This is an impressive read, and not a quick one! My reading notes in French are here.

5 out of 5 – June Ding – The title made me curious. The author did make the case that anything can be measured, including many things that we consider abstract or intangible. The stories at the start of the book are fascinating and opened my mind about what we think measurement really is. There is no perfect measurement. There is no absolute truth. Measurement is a quantitatively expressed reduction of uncertainty based on one or more observations. I also find the methods to define the problem, and the notion that a measurement has to support a decision, helpful.

5 out of 5 – Kc – I purchased this book because I am in the middle of a project where I have to measure an "intangible". I liked the author's ideas on breaking down a measurement and figuring out the uncertainty factor on each variable. The information he provided helped me to find a solution for my project.

4 out of 5 – Pauli Kongas – Perhaps not the best read in audio because of some math and a lot of pictures, etc.

5 out of 5 – James – Not quite what it says on the cover, but still seems fairly useful. This is basically a book about how to reduce one's uncertainty about empirical outcomes - kind of like a leveling-up guide for a Metaculus predictor or something. There were a few useful bits of content, but I found that most of the information in here was old hat. Most of the value from this read was food for thought rather than direct lessons from the author. Notes: • Book claims to be about how to measure "intangible" business variables, like flexibility of a project, effectiveness of management, or public image. • Proposes developing strategies by not only considering the unknowns, but calculating the economic value of additional information and quantifying where value is high. ○ This is way more useful if the value of information about different variables is power-law-like. Author suggests that is true in business strategy. • "Power tools" method - not necessary to memorize all the inner workings of the tools you want to use. Pre-made spreadsheets and copy-pasted code are fine. But as with any power tool, some basic safety training and understanding is required to not have accidents.
• When the author says "measurement", what he means is "quantifiable reduction in uncertainty". ○ Draws an information-theoretic parallel - reducing uncertainty about a quantity means opening an information channel from the true value to your probability distribution. ○ Measured values can be anywhere from fully cardinal to only nominal; what matters to satisfy the "quantifiability" criterion of the author's definition is quantifiable uncertainty. ○ This book is really about how to squeeze information out of the universe more effectively. Sick. • When firms are concerned with measuring something they think is intangible, it's often the ultimate impacts of that intangible variable that they're really concerned with. Measuring those impacts instead is a quick solution. • "Rule of Five": if you take a sample from a population where n=5, there's a 93.75% chance that the population median is within the range of your sample. This always works, regardless of the distribution - there is always a 0.5^4 chance that all five will land on the same side of the median. The median might not always be useful, but this is a really cool trick. ○ Up it to n=6 and you get 1 - 0.5^5 = 0.9688, which qualifies as a conventionally statistically significant confidence interval (for the median). • "Mystery Urn": when trying to determine a population proportion (2 types) and starting from complete ignorance, there is a 75% chance for a single sample, n=1, to belong to the majority class. • Author claims people tend to give up on trying to get info about their problem because they think it's more unique than it really is, underestimate the volume of data they have access to that carries at least some relevant information, and overestimate how much data they will need to reach some sort of useful result. • Research from Wharton in the '90s showed that Fermi decomposition into fewer than five components can cut off 2 or 3 orders of magnitude of error. ○ Has anyone tested the Law of Large Numbers as a potential mechanism?
Does the error reduction scale correctly with the number of variables you decompose into? • E. T. Jaynes has done work suggesting even quantum randomness is observer uncertainty, not "intrinsic randomness". • Before taking any measurement, make sure you can identify how the results would be decision-relevant. Avoids illusory empiricism. • Research shows betting improves calibration, and pretending to bet improves it almost as much. • Asking for flaws in an estimate also improves calibration. • Choosing an estimate and then producing bounds is very likely to be overconfident because of anchoring. A better strategy is to treat each bound separately - for a 90% CI, we must be 95% sure the value is above the bottom bound and 95% sure it's below the top bound. • Author uses all sorts of weird values like "Expected Opportunity Loss" (expected value but only including the negative case), which then forces all sorts of weird stuff (ignoring upside in calculating info value) and arbitrary distinctions (EOL for distributions) in what seems like a hobbled version of expected value calculations. ○ These might actually be more useful than stuff like expected value that is supposed to be "rational in general", since the thing you're trying to maximize isn't performance, it's points with the boss, and so loss aversion comes in. Maximizing that explicitly is hard, so these things that seem counterintuitive to me might actually be adaptations. • The mean and variance of true power laws (with heavy enough tails) will never converge to a finite value as sample size increases. • When doing surveys to estimate a numerical value, avoid giving options. The scale of the options influences responses (anchoring). • Experts tend to increase their confidence with more analysis even when actual performance doesn't get better. • "Lens model" uses expert judgments of hypothetical scenarios to extract linear weights via regression. In application, it often does better than the experts who contributed to it.
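The Rule of Five bullet above is easy to verify by simulation; analytically the chance is 1 - 2(0.5^5) = 93.75%. A quick sketch (the skewed "commute times" population and trial count are arbitrary choices for illustration):

```python
import random

def median_in_sample_range(population, n=5, trials=20000, seed=0):
    """Fraction of random n-samples whose [min, max] range
    contains the population median."""
    rng = random.Random(seed)
    srt = sorted(population)
    mid = len(srt) // 2
    median = (srt[mid - 1] + srt[mid]) / 2  # even-sized population
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, n)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

# Works regardless of the distribution's shape; try a heavily skewed one.
commute_times = [x ** 3 / 1e6 for x in range(1000)]
rate = median_in_sample_range(commute_times)
# Theory predicts a hit rate near 1 - 2 * 0.5**5 = 0.9375.
```

The claim holds because the only way the sample range can miss the median is for all five draws to land on the same side of it, and each draw independently has a 50/50 chance per side.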

4 out of 5 – Hamish Seamus – The only other book I've read which justified its length was Decisive by Dan and Chip Heath. Strategy: 1. Make a list of factors you think are relevant 2. Convert each of these factors into a z-score 3. Add them up to get an overall ranking Before measuring something, answer the following: 1. What decision does this support? 2. What observable consequences does it have? 3. How does this matter to the decision? 4. What is the current level of uncertainty? 5. What is the value of additional information? Notes * You can estimate how many fish there are in a lake by catching some at random, tagging them, and throwing them back in. Repeat until you've recaught several fish and then do some statistics. * Amazon added the ability to add gift wrapping so that it would know how many people are buying things as gifts. Coupons are placed in newspapers so that retailers know which newspapers their customers read. * Look out for basic questions/measurements which might obviate any further investigation. * McNamara fallacy: "The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide."
— Daniel Yankelovich, "Corporate Priorities: A Continuing Study of the New Demands on Business" (1972) * Statistician David Moore said, "If you don't know what to measure, measure anyway. You'll learn what to measure." This can also be characterised as a "measure first, ask questions later" school of thought. * To IQ skeptics: if we can measure a decrease in IQ due to lead poisoning, are you saying we should ignore this, or that it isn't real? * The Rasch model gives you a way of assigning scores to different people who did different tests. Or something. You just have to add log odds. Or something. * Brunswik lens model: look at how experts make decisions, model their decision making with a linear model, and the result will generally be at least as good (it removes inconsistency). If you give experts a bunch of real or made-up instances and get them to predict labels, then you can create such a lens model. If you give experts the same instances several times, you can estimate the error due to expert inconsistency (which will be removed by using the lens model). * The Black-Scholes model is how to price stock options. * Scientometrics is something I should read into. Plus here's a quote from Night by Elie Wiesel which is in my notes for some reason: * 'At last he said in a weary voice "I've got more faith in Hitler than in anyone else. He's the only one who's kept his promises - all his promises - to the Jewish people."' - Night, Elie Wiesel
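The three-step z-score strategy at the top of this review can be sketched in a few lines (the vendors, factor names, and numbers below are invented for illustration):

```python
from statistics import mean, stdev

def z_score_rank(options, factors):
    """Sum per-factor z-scores so that no single factor's units
    dominate, then rank options by the total (highest first)."""
    totals = {name: 0.0 for name in options}
    for factor in factors:
        values = [options[name][factor] for name in options]
        mu, sigma = mean(values), stdev(values)
        for name in options:
            totals[name] += (options[name][factor] - mu) / sigma
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical vendor choice; price is negated so "higher is better" everywhere.
vendors = {
    "A": {"neg_price": -100, "quality": 7},
    "B": {"neg_price": -80, "quality": 6},
    "C": {"neg_price": -120, "quality": 9},
}
ranking = z_score_rank(vendors, ["neg_price", "quality"])
```

Standardizing each factor before summing is what keeps a factor measured in dollars from swamping one measured on a 1-10 scale; weights could be added per factor if some matter more to the decision.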