Computational complexity theory studies the feasibility of, and the resources required for, solving computational problems, and it is useful to any field concerned with the analysis and design of algorithms (a much broader set of fields than one might first think). While a good number of notes and lectures are available online, they are scattered across university course pages, YouTube, and elsewhere. This guide brings that material together for learning computational complexity theory at the introductory graduate level, especially for those without a formal CS background.
Frank Ramsey (1903-1930) was a Cambridge philosopher who made significant contributions to a variety of areas on the outer boundaries of philosophy. To name a few well-known ideas to which he contributed: Ramsey theory in combinatorics, subjective probability, and intergenerational welfare economics.
mobilityIndexR is an R package for calculating transition matrices and indices to measure mobility within a sample. For instance, tracking the income of a cohort over some period of time allows one to measure the economic mobility of that cohort, and tracking the grades of students in a class allows one to measure grade mobility. This post is an invitation to the package and an introduction to the ideas it implements. For a general introduction to economic mobility, see Further Reading at the bottom of the post.
The idea of this post is to introduce and discuss several interesting research programs from the past decade. A research program (or programme) is a common thread of research sharing similar assumptions, methodology, and so on. The list below contains a variety of research programs: some address topics with broad appeal, e.g. explainable machine learning and mental disorder; others have moved the direction of entire industries, e.g. advances in computer vision and cryptocurrencies; and others still are niche areas that I happen to be deeply interested in, e.g. topological learning theory, privacy attacks on ML models, and graph-theoretic approaches to epistemology.
Below are the top books I’ve read in 2020. This year, there seems to be no particular theme; however, each of the three books below is rather short and readable. The History of Phlogiston Theory aims to dispel myths about phlogiston theory and provide a brief history of chemistry in the 18th century as the field moved from alchemy to quantitative methods. The Abraham Dilemma develops a theory of delusion as a mental disorder, with a focus on the peculiar complications of religious delusion, informed by the experiences of clinicians and patients. Finally, Libra Shrugged dives into Facebook’s (currently unsuccessful) attempt to launch a cryptocurrency on a worldwide scale.
Below are the top articles I’ve read in 2020. This year’s list contains a nice mix of types of articles. A prominent theme in the list is economics and economic methodology, with A Theory of Optimum Currency Areas, Economic Modelling as Robustness Analysis, and Thoughts on DSGE Macroeconomics. The Theory of Interstellar Trade is an oddball article from a young Paul Krugman. An introduction to (algorithmic) randomness is an excellent invitation to a technical area of mathematical logic, and Comments on Economic Models, Economics, and Economists is a fun and effective book review on methodology. Finally, there are four thought-provoking articles across political rhetoric, machine learning privacy, social contract theory, and international relations: The Paranoid Style in American Politics, Stealing Machine Learning Models via Prediction APIs, Self-organizing moral systems, and The End of Grand Strategy.
Economic Methodology Meets Interpretable Machine Learning - Part IV - Current State of Economic Methodology
This post looks at the current state of economic methodology with respect to the realistic assumptions debate. After briefly surveying the history of economic methodology, we’ll walk through two recent arguments in the realistic assumptions debate: one in favor of instrumentalism as a theoretical ideal and the other favoring a limited form of realism in practice. In light of these arguments, I’ll argue that practitioners can still adopt some form of limited realism in practice if such an approach is an expedient guide to creating models with desirable properties.
Economic Methodology Meets Interpretable Machine Learning - Part III - Responses to Friedman's 1953 on the Realism of Assumptions
This post discusses three responses to Friedman 1953 (which we introduced in Part II). Friedman’s contention, termed the “F-Twist” by Samuelson, is that economic theories should be evaluated only on their predictions within some specified domain. The F-Twist puts Friedman on the instrumentalism end of the realism of assumptions debate. The responses by Paul Samuelson, Stanley Wong, and Dan Hausman discussed below provide various lenses through which to view the problem of the realism of assumptions and, ultimately, in my view, render the F-Twist untenable in isolation.
This post introduces Milton Friedman’s 1953 essay The Methodology of Positive Economics, which takes the position that economic theories should be evaluated only on their predictions within some specified domain. This essay has been called “the most cited, the most influential, and the most controversial piece of methodological writing in 20th century economics” and plays the foil (and occasionally the bogeyman) in much of the economic methodology literature. Its influence is such that it is often referred to simply as Friedman 1953 or even F53.
I recently moved to a System76 Darter Pro running Pop!_OS 19.10 as my primary laptop (review coming soon). As you might have guessed from the version number, Pop!_OS is System76’s fork of Ubuntu. With this move, I switched to Shadow as my cloud gaming service, since they have a supported Linux client - no messing with Wine, dual boots, or VMs required! The Shadow Linux client is built to be compatible with Ubuntu 18.04+ but didn’t work right away due to some video acceleration issues. Below is a guide for getting Shadow running on 19.10 based on my experience troubleshooting.
I’m currently working through Raymond Smullyan’s The Gödelian Puzzle Book and came across a fun problem that serves as a good starting point for new readers of Smullyan. Smullyan is well known for (among many other things) producing several books of logic puzzles that introduce ideas from mathematical and philosophical logic in an accessible but still technical way. These books typically follow a format in which each chapter offers an introduction to the relevant characters, ideas, and setting; several problems for the reader to work through; and solutions to those problems. I’ve found that working through Smullyan’s books builds mathematical intuition better than repeatedly walking through proofs and applications of theorems. I would much rather go through Smullyan’s Gödel book than ever see Enderton’s Chapter 3 again!
Economic Methodology Meets Interpretable Machine Learning - Part I - Interpretability, Explainability, and Black Boxes
This post is the first entry in Economic Methodology Meets Interpretable Machine Learning. It briefly introduces the ideas of black boxes, explainability, and interpretability for machine learning models and offers arguments for and against deploying only interpretable models in the wild when they are available. The debate over interpretable models in machine learning is far from settled and has been getting much attention in recent years.
In this series of posts, we will develop an analogy between the realistic assumptions debate in economic methodology and the current discussion over interpretability when using machine learning models in the wild. While this connection may seem fuzzy at first, the past seventy years or so of economic methodology offers many lessons for machine learning theorists and practitioners to avoid analysis paralysis and make progress on the interpretability issue - one way or the other. But first, what’s going on with these two debates?
Here are the top three books I’ve read in 2019, presented below in chronological order by year published. While quite the cliché, the theme that emerged this year is to not judge a book by its cover. While Measure and Category by John Oxtoby appears to be a terse math treatise, it is a short, well-paced, lucid read (though requiring some prerequisites). Braudel’s The Structures of Everyday Life digs deeply into the minutiae of common experience in early modern Europe rather than providing an overarching historical narrative. To finish, Haskel and Westlake’s Capitalism without Capital is a well-researched - if at times dull - look at intangible assets from an economic perspective, despite a title that reminds one of a political polemic.
Below are the top eleven articles I’ve read in 2019. A theme of methodology runs through this set of papers, especially statistical methodology. There’s also some fun miscellany mixed in with blockchain (whose craze seems like a lifetime ago now), unicorns, and the history of the English language. To my surprise, all of these articles are from the present decade. They are presented in chronological order.
When approaching measure theory for the first time, the ideas can seem opaque and unmotivated. This is amplified since many students of measure theory are not coming from a strictly mathematics background and may be approaching the material on their own outside of the classroom. In addition to first-year math graduate students and advanced math undergraduates, students in stats, economics, the hard sciences, etc. will find their way into learning measure theory. This is a guide to resources for learning measure theory that tries to keep in mind that many (myself included) approach the material with an atypical background.
JSON is the typical message-passing format used by web services, and it is relatively human-readable. Even so, more human-readable than the alternatives does not mean simple: JSON objects can be quite complex, and Python offers no clear, general method for extracting information from them (see here for a tutorial on working with JSON data in Python). This post provides a solution for when one knows the path through the nested JSON to the desired information.
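The post's own implementation may differ, but the path-based idea can be sketched as follows: represent the path as a list of keys and list indices, then follow it step by step through the parsed object. The `extract` helper and the sample data here are illustrative, not taken from the post.

```python
import json
from functools import reduce

def extract(obj, path):
    """Follow a sequence of dict keys / list indices through nested JSON-like data."""
    return reduce(lambda acc, key: acc[key], path, obj)

data = json.loads('{"user": {"name": "Ada", "posts": [{"title": "Hello"}]}}')

extract(data, ["user", "posts", 0, "title"])  # → "Hello"
```

Because `reduce` applies plain indexing at each step, the same helper works uniformly for dicts and lists; a production version would likely add error handling for missing keys.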
In epistemology, we often think of the things we believe as discrete propositions. For instance, you may believe that there is a computer screen in front of you. But how is this belief justified? One way of justifying a belief is by offering a reason, which can itself be a proposition. We can then ask how that proposition is justified, and so on. The regress problem asks the following question: if any of the things we believe are justified, then what is the structure of that justification? Does the justification question not just keep getting passed backward forever, with reasons for reasons for reasons?
Here are the top three books I’ve read in 2018. They are presented below in chronological order. While these three books seem rather disparate, they are bound together by themes of innovation, conflict, and ideology.
Here are the top eleven papers I’ve come across in 2018.$^*$ These papers are mostly recent publications (within the last two years) with some older ones peppered in. They are in chronological order below.