86 Comments
James Fodor:

While I am very sympathetic to Onid's argument here, ultimately I'm not sure it's a very good counter to theists. They will simply deny that such mathematical or computational approaches are applicable to God. In particular, there is no proof that everything that exists is computable, so they can simply deny the completeness of Turing computation. They can then define an alternative notion of simplicity based on the number of novel substances or properties they need to postulate for God to exist (or whatever other claim they want to make). I think this approach is hand-wavy and unconvincing, but I don't think you can refute it by pointing out that it is inconsistent with Kolmogorov complexity. They'll simply agree and ask why they should care about that.

Nathan Ormond:

This is (IMO) the problem with the game Theists are playing with rationalistic proofs and scientific-looking analyses. People in the business of offering these often want to claim the rigour and credibility of scientific subjects by deploying their notations and lexicon, but they operationalise the terms in ways that aren't connected to pragmatic success the way theories are in the sciences/engineering (i.e. in terms of world control). The problem then becomes that what constitutes success for the deployment of these terms in theology/philosophy is nothing more than being taken seriously by your peers; the rubber never meets the road like it does in STEM disciplines (or even political theorising). As such, it's very easy to modify your theory and introduce nebulous entities like "an ideal language that I can't tell you anything about, other than that it does exactly what I need it to do to avoid your criticism". Such a modification would be absurd in any other discipline because it's entirely useless: there are no consequences to whatever you say either way, and the entire discourse is a castle of air -- there is no way to put it to work in experiments, engineering, or predictions for observations (because it's a completely speculative, infinitely flexible unknown). And these "special" ways of using terms like simplicity (metaphysically, theologically, i.e. "divine simplicity") then lead to unanalysable, meaningless, or false claims, like the claim that having "infinite/unlimited power" (whatever that is even supposed to mean) is simple.

James Fodor:

I agree. But when your criterion for success of an argument or theory is simply 'makes sense to me on my armchair', then you don't care about any of that.

Note that good philosophy isn't like that; it should interface with empirical sciences and help theoretically guide research programs. Philosophy of religion doesn't.

Nathan Ormond:

Yeah, we're in agreement about this for sure.

I also agree there is good philosophy that isn't like this; however, when well operationalised, the scope and limits of that philosophy are very different from those of "big"/"deep" metaphysical questions -- because we aim to ask well-formed questions that are susceptible to empirical analysis, making them fall under the purview of the sciences!

Abe:

This is why I appreciate that this was written as a response to Bentham's Bulldog. He understands the value of parsimony, and has argued for God as a simple answer to fine-tuning and other mysteries. He may not accept K Complexity as a worthwhile conception of what it means to be simple/complex, but I suspect he would engage seriously with the argument.

Nathan Ormond:

Does "engage seriously" mean saying the sentence "I find X implausible, therefore I prefer my view that an infinitely flexible theory whose mechanisms I can say nothing about is the most simple" and just refusing to interrogate those "intuitions"? Is that serious?

Abe:

Well, like Onid wrote, that was likely written without much thought; it was a reply to a short comment, after all.

Has he said anything about this piece, I wonder?

Onid:

He does have a reply in the comments section here, but sadly I did not get the impression that he had carefully read what I wrote.

Nathan Ormond:

I guess it's reasonable to assume something like that, but having read most of Matthew's writing and watched his thought on these topics over the years, no! i.e. I don't think that he has anything to say on this topic that isn't hand-waving, question-begging, superficial and contentless, or highly speculative.

Abe:

I suppose I'm just more optimistic about his (and other theists who get there via similar reasoning) ability to grok the consequences of offering a vastly, impossibly more complex account of reality, and of K Complexity as a valid and potent measure of same.

But you may be right, I haven't read many of Matthew's writings on this topic.

Nathan Ormond:

Well, is it more complicated or less? Because they're arguing it's simpler!

Look, I'm an anti-rationality guy; I don't believe in ultimate theories, or that K-complexity really, ultimately explains things, or whatever. What I do know is that all Matthew is offering is an illusion. He has built a textbook example of a bad theory that does nothing, explains everything, and can accommodate any observation, possible or impossible. It has no content. All of the "rigorous" methodology of using statistical notation (or, worse, mentions of statistical methods) is hand-waving and not empirically driven. In my mind it is exactly on a level with Young Earth Creationists who know more about DNA than your average person in order to lawyer for their conclusion...

At the end of the day, he doesn't have anything to say except that some bizarre claims about metaphysical simplicity "seem" (in some self-justifying, blank-cheque way impervious to criticism) to be exactly the right ones, and they just happen to be the exact ones you need for Theism to be true.

And it's not like I think Matthew is stupid. What annoys me is that he is wasting his life on this crap when he could be putting it toward something more useful. To me it sometimes seems like what he gets out of this whole thing is the intellectual thrill of defending ever more absurd views and winning against people less intelligent than him.

Daniel Greco:

Really nice piece. I wonder what you think about this way of glossing the "why care about K complexity" question for people like Bentham and Simon Laird above, for whom it doesn't seem to be the notion of complexity relevant to theory choice.

A big part of the scientific revolution was the decision to focus on quantitative, mathematical descriptions of nature. E.g., rather than thinking of matter as having a bunch of Aristotelian qualities (e.g., sweetness), we throw all that stuff out--at least when doing science--and conceive of matter in quantitative terms like mass, position and velocity, which allow us to formulate precise mathematical theories of the behavior of matter.

To understate by a whole lot, this strategy was very successful. You don't get Galileo, Newton, etc., without that methodological shift.

Now, if you want a story about complexity that can play a role in scientific theory choice--something like Ockham's razor, where you prefer simpler theories to more complex ones, at least all else equal--it should be a story about complexity that can evaluate the complexity of the sorts of precise, quantitative mathematical theories of the sort we learned to value in the scientific revolution. K complexity is suited to the job--Solomonoff induction is built on the idea--while "vernacular complexity" is not.

This isn't exactly going to be persuasive, since my sense is that theists will often think that the Galilean lesson was overlearned. Focusing on quantitative, precisely describable aspects of nature was a good methodological innovation, because those are the parts that are easier to learn about, but there's no good reason to think that's all there is; there are more things in heaven and earth than can fit into mathematically precise, quantitative science. (This is a theme in Philip Goff's book, Galileo's Error.)

While I don't think it was an error--I'm with the materialists--I don't think that's a simple (haha) argument to make.

Reader:

Excellent. Thank you for this! In my view, theists’ reluctance to grapple with these questions is evidence that their primary strategy for arguing for god is essentially an unserious appeal to magic. They raise their hands in exasperation, “It’s all so complicated and improbable! Why is all of this crazy world this way and not some other way?! I know, god did it! He can do anything, by definition. Phew, mystery solved. Don’t ask me how he did it; he’s infinitely mysterious and beyond all comprehension or reason!” God to them is a “magical incantation” (as you put it) to “make sense” of any and all phenomena. Of course, almost by definition, making sense of things by attributing them to magical, not-understood entities is not actually to make sense of them at all, as doing so increases understanding in no substantial way whatsoever.

JD Free:

Regardless of whatever clumsy arguments you may have encountered from particular people, complexity is not an argument against God; it's an argument for God. The more complex the world is, the less plausible spontaneous order is, and the greater the intelligence required to have designed it.

"Something smarter than us designed it" IS the most rational explanation for the existence of something that's beyond our ability to design, once "it spontaneously appeared out of nature" isn't plausible. Whether you find that explanation satisfying is irrelevant.

SorenJ:

Very well written!

I am of the opinion that the issues you wish to set aside, in the spirit of the general idea that the universe can be computed by a Turing machine, are not actually so trivial. You discuss this in more depth in your footnotes. I really don't believe that the universe is comparable to a calculable function (I am saying this as a physicist). In your footnotes you also say that the simple thing to do would be to simulate a multiverse -- but this introduces a whole 'nother can of worms!

I have also felt the same about some of Bentham's arguments. I pushed back on one of his articles, making the point that I really don't think the universe or multiverse or space of logically possible universes could be described by a sigma-algebra. So his Bayesian arguments don't seem very convincing to me if you can't even describe the sample space rigorously.

Onid:

Ah yes, I had been afraid a physicist would reply. Alas, you are probably right.

But here is my partial counter-argument: the entire universe may very well not be calculable, but many (all?) of the known physical laws are individually calculable. As in, if you ignored the “main loop” of the program, it’s still pretty easy to imagine something like an “evolve_quantum_state()” subroutine which is very easily calculable.
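For concreteness, here is a toy sketch of the kind of subroutine I mean: a single qubit evolved under a fixed Hamiltonian. The Pauli-X choice and the step size are arbitrary assumptions of mine; the point is only that the update rule itself is a few short, easily computable lines.

```python
# Toy "evolve_quantum_state()" subroutine: one deterministic update step
# for a single qubit under H = Pauli-X (an illustrative, made-up choice).
import math

def evolve_quantum_state(state, dt):
    """Apply U = exp(-i * X * dt) to a 2-component state vector (a, b)."""
    c, s = math.cos(dt), math.sin(dt)
    a, b = state
    # Matrix form of U: [[cos dt, -i sin dt], [-i sin dt, cos dt]]
    return (c * a - 1j * s * b, -1j * s * a + c * b)

state = (1.0 + 0j, 0j)            # start in |0>
for _ in range(100):              # the "main loop" stands in for the rest of physics
    state = evolve_quantum_state(state, 0.01)

norm = abs(state[0]) ** 2 + abs(state[1]) ** 2
print(round(norm, 10))            # unitarity keeps the norm at 1 (up to float error)
```

Each step is a short, exact unitary update; the difficulty lives in precision and the "main loop", not in the subroutine itself.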

As for the multiverse, I assume at least one of the issues you're referring to is the measure problem? If so, I tried to address this a little too. While you couldn't cover *every* possible universe, I think you could pretty easily loop over some finite set of universes such that at least one of the ones you hit was our own.

But either way, even if trying to imagine a universe simulation is a little silly, I hope the general ideas still stand.

SorenJ:

Yeah, I don’t know if what I wrote really refutes your main point: the universe may not be calculable, but it is *still* the case that “the universe” is simpler than “the universe” + “god.”

There is the measure problem for the multiverse, in the sense that the sample space may be too large, but I also had in mind Bertrand's paradox (named after Joseph Bertrand).

Onid:

Thanks!

I’m curious how you were thinking Bertrand’s Paradox might apply here? I think the partial solution - where we sample a finite set that includes our own - still applies here. Or, in relation to Bertrand’s, it doesn’t matter how the probability distribution is constructed as long as our universe is within the support.

SorenJ:

Hmmm let me see if I am making sense… You are right that it doesn’t affect your ability to sample whatever slice of sample space you want.

Either you are taking the multiverse as a probability distribution but imagining there is only one actual universe, or you are imagining that every universe in the multiverse actually exists.

If it’s the former, and you are trying to make a fine-tuning argument, or just trying to think about probabilities in general, then you still need to pick a measure, but it’s not clear how to do this in a non arbitrary way.

If it's the latter, then your ontology should still match your Turing machine, so you need to explain why certain universes with physical constants near certain values appear with more or less relative frequency. I think you still need a measure.

In other words, the Turing machine may be perfectly adequate for sampling a finite set of universes within the multiverse, but if the multiverse is actually reality, then it needs a measure, and somehow your Turing machine needs to reflect that.

Onid:

Ah yes, that does make sense. I confess shortly after I made my comment I realized that if it is the first situation - where the other universes are “virtual”, then you still need a way to specify which is the “Real” one we care about.

Oh well. Not sure anything can be done about that.

John Encaustum:

For better and worse this argument doesn't work, though I don't like Bentham's Bulldog's argument either. Another physicist has made the point already, but I'll state it again in case it's helpful to have another phrasing.

Turing machines do not support infinite precision reals or indeterminism and physical law does appear to support both, so (1) it's much harder to write the current laws of physics in terms of Turing machines than it may seem and (2) the usual Turing-universality-based Invariance Theorem does not apply to the computations that physics appears to do. Granted, in future theories there may be a fundamental discretization of spacetime and also determinism (even superdeterminism) of the laws of physics, but it's not what we have for now.

Onid:

Sadly, you’re right. This was a case of brushing off a technical detail that turns out to be quite important.

I still think my partial response to the other comment holds - the individual laws as subroutines can still have low complexity. I'll also point out that both of the issues you mentioned - indeterminism and infinite-precision reals - are issues that I addressed specifically in the footnotes, and I don't think either one prevents us from running a simulation. The bigger issues are measure problems, like needing an infinite number of universes or the ability to handle an infinite, unbounded spatial span.

That said, I have spent the last few days thinking about this. There are models of computation which can handle infinities in different ways, such as real RAM or infinite-time Turing Machines. Neither of those would help us with this particular issue, but I’m curious if there’s a model that might, either in existing literature or even as something novel.

John Encaustum:

Sorry if I was too brief to be clearly categorical: whether or not the laws could be simulated at all, they cannot be simulated efficiently and exactly, and those are hard barriers to applying an Invariance Theorem argument. The various quantum, random, and memory-space computational complexity classes nest like they do for this reason.

Infinite numbers of multiverses are, maybe counterintuitively, generally less of a problem for efficient and exact simulation than infinite precision real numbers subject to chaotic nonlinear dynamics. Physical laws for infinite precision real number fields on even the tiniest finite domains of an infinitely divisible spacetime will immediately embroil you in dealing with infinite dimensional spaces and their nonlinear operators.

It is true that different models of computation can handle infinities in different ways, but then you'll get into potentially mystical constructions like BB's where there could be inequivalent choices of super-UTM oracles built into the different programming languages, and so then the Invariance Theorem argument doesn't generally hold because the inequivalent oracles may not be able to simulate one another efficiently.

Sorry as well if I'm being a buzzkill, here. You're asking good questions and you're on the right track for unpacking and understanding a lot of abstruse computational complexity theory!

Onid:

Maybe I'm being foolish for pushing this, but is efficiency really a requirement? No matter how chaotic or ill-conditioned, there's going to be some number of digits of precision such that all computation is accurate within some desired range.

Obviously that number would be insanely large and insanely inefficient, maybe even incomputable to find, but so what? Finding the true minimal encoding that determines k complexity is also incomputable. And we can still describe (rounded to some small number of significant digits) any arbitrarily large number relatively compactly.
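To illustrate the compact-description point (with arbitrary, made-up numbers): a mantissa-and-exponent pair describes an astronomically large value in a handful of characters.

```python
# Sketch: describing a huge number compactly, rounded to a few significant
# digits. The particular mantissa and exponent are arbitrary examples.
mantissa, exponent = 314159, 10**6      # stands for 3.14159... * 10**(10**6)

description = f"{mantissa}e{exponent}"  # the entire description
print(len(description))                 # a handful of characters...
print(exponent + len(str(mantissa)))    # ...for a number with ~10**6 digits
```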

I don’t think we need anything like BB numbers either. The problem with those (as I see it, I’m possibly missing the point you’re making) is that we can’t compute their specific value because it requires solving of the halting problem. But we don’t need a specific value, just some absurdly high value.

Anyway, I think it’s almost certainly the case that we can’t solve this problem, but I’m still not convinced enough to let it lie.

(By the way, the model I was imagining was a Turing machine with something like infinite parallel threads, not an oracle model. If you know of something like that in the literature please let me know. I’ve been thinking about it a little but haven’t come to any interesting conclusions yet)

John Encaustum:

It's not foolish to be asking any of these questions, imo, but I meant efficiency in exactly the sense required for the Invariance Theorem, so yes, it's a requirement for the argument.

In physics, big future effects frequently depend on small differences in current state (as you're already aware, but I'm repeating it in case you've missed an implication), so the strategies to "just choose a number of significant digits" don't tend to work well in computability proofs: that's why I mentioned chaos.

My mistake on abbreviating "Bentham's Bulldog" as "BB" where that was ambiguous! Busy Beaver numbers weren't things I meant to bring in at all, and yeah, I'd argue against myself if I were bringing those in there, too.

Turing machines with infinite threads are a great thought experiment and I do think those are in the literature but I don't remember the keywords for literature search there off the top of my head, sorry. Oracles usually end up being a more productive way forward and they're more standard iirc, but definitely follow your intuition where it leads. I think it's always better to follow through on what you're spontaneously motivated to explore than to take on faith that different directions are more important because they're better traveled.

Dr Brian:

Hi Onid - Thanks for this writeup. I'm a Comp Scientist myself, so I enjoyed seeing Kolmogorov Complexity used here.

I do have a question, and am interested in your view on it. The question is whether one must also encode the starting condition (state of the singularity) when measuring the K-complexity for your simulation of the universe, and if you do, how one might compare that additional factor to the K-complexity of god?

If I read your argument correctly, you are saying your universe-simulation must encode the rules of physics, and then "the output of our program needs to be an exact description, down to every sub-atomic particle, of exactly how the universe looks at every possible moment up until a pre-defined end point". You suggest that "the code itself would still be short - the universe would not be especially hard to describe"

However, I believe that the very particular universe we now live in is defined by both "the laws of physics" and also "the starting point", ie, the original singularity including every last non-uniformity that gives rise to the specific configuration of our present universe. I am aware that one might want to claim that a "singularity" is by definition easy to describe: mass, rotation, charge (like a black hole). But I don't believe physics claims that any random singularity (acted upon by the laws of physics) would have necessarily resulted in our particular universe. Instead, either a) our singularity had important and unique non-uniformity embedded within it, or b) there were important (random?) quantum non-uniformities introduced immediately after the big bang. Regardless, after 14 billion years of faithful application of physical laws, we are now having this conversation in our particular universe due to these non-uniformities. If the singularity or subsequent flux had been ever so slightly different in any way, I'm guessing Earth would not exist and we would not be having this conversation. If there were no non-uniformities, then the universe today would be a purely uniform and undifferentiated sea of energy/matter.

Lastly, I'm guessing there are an almost infinite number of possible singularities or fluxes - all of the universe's energy-matter in all its possible singular configurations - that would need to be differentiated, and one particular one encoded as part of the K-complexity for the simulation.

I'm thinking that a) the encoding of the singularity's configuration is an essential term in the K-complexity of the universe, and b) the encoding is not small and may be essentially infinite.

I agree with you that a K-complexity encoding of god is difficult to contemplate (and it is a good challenge to deists to come to terms with that). But it seems hard to claim that the K-complexity of the purely physicalist universe is not also quite hard to contemplate.

For the record, I'm a physicalist. See for example https://brianbinsd.substack.com/p/the-simple-flaw-in-chalmers-argument

Onid:

Thanks for your comment.

1. Regarding initial conditions, you are correct - that would need to be encoded. My original suggestion from the footnote would be that you could iterate over that, just as you could for the universal constants like the speed of light - just try all possible universes. That would suggest a multiverse.

Of course, this has a bit of a measure issue - how do you select the set of possible initial conditions? There isn’t really an answer to this and it’s the main thing holding everything back.

2. You are also correct regarding the problem of infinity. My argument was predicated on the assumption that the universe is finite. If it is not, all of our ideas break down.

3. Unsurprisingly, I have discovered since writing this that several other thinkers of much higher caliber than myself have explored this idea and thought through the details much more thoroughly than I have. Evidently, though I haven’t read it, Tegmark’s book on the Mathematical Universe Hypothesis makes both of the points I made above: I believe he stipulates that the universes of his hypothesis must be finite, and acknowledges that there is a measure issue involved. Similarly, I found this paper from 2000 by Schmidhuber, a fairly influential computer scientist, who basically asks the same “K complexity of the universe” question I did, complete with expanded models of computation better suited to handling the precision issues that I discussed in my footnote: https://arxiv.org/abs/quant-ph/0011122
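As a toy sketch of the "iterate over all possible initial conditions" idea from point 1, with bit-strings standing in for initial conditions and an arbitrary placeholder seed: enumerating in order of length guarantees that any particular finite initial condition is eventually reached.

```python
# Enumerate every finite bit-string, shortest first, so any given finite
# initial condition (a stand-in for "our" universe's seed) is eventually hit.
from itertools import count, product

def all_initial_conditions():
    for n in count(1):
        for bits in product('01', repeat=n):
            yield ''.join(bits)

target = '1011'  # hypothetical placeholder seed for "our" universe
for steps, seed in enumerate(all_initial_conditions(), start=1):
    if seed == target:
        break

print(seed, steps)  # the loop provably terminates for any finite seed
```

This says nothing about the measure problem, of course; it only shows that the enumeration itself is trivial to write down.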

Dr Brian:

Thanks for the references and the exchange

Zinbiel:

Great post.

Nothing to add, but I noted a couple of typos:

--Basically, there’s a certain list of minimal commands that a valid (the techincal term is “Truing Complete”)--

Onid:

Thanks. Sadly I’m terrible at proofreading, particularly when it comes to transposed letters.

Gavin Pugh:

I really liked this, I hope you write more!

1 question and 1 formatting suggestion:

Q: WRT footnote 6, why should the training data for a brain need to be included in the code? Why couldn't it, like the LLM, be reduced to the resulting neurological equivalent of weights?

Formatting suggestion: Footnote 5 is long enough that it doesn't display well when I hover over the footnote. I think it would be easier to read if you formatted it like this:

...that would be just one(5) of(6) many(7) possible barriers to using conventional computation

(5) The first problem is our choice of numerical precision...

(6) The second problem is quantum randomness...

(7) The third problem, however...

Onid:

Thanks for the feedback! I’ll keep that formatting suggestion in mind.

Footnote 6 was originally a lot longer and I cut it down at the last minute, apparently without rereading it carefully. You are right, that most likely would be the K-complexity of the brain, and I had meant to write that. Though of course even that encoding would not be short, and if we were talking about God’s mind it would certainly be massive.

What I was trying to suss out when I planned to write that footnote was the idea that while an explicit description of the final outputs (e.g. the weights of the LLM) might be the true K complexity, that’s not actually a reflection of the process that gave rise to the outputs, which is of course far more complex in that it requires training data.

Ultimately, I got a little lost trying to figure out how that argument could be made more rigorous, or if the gap even mattered. In the end I decided to cut it but apparently was not careful about what I removed.

Bentham's Bulldog:

I'm aware that if you hold that to fully describe something you must describe it in math, then minds won't be simple. I'm also aware that whether minds are simple will depend on the language. But so what? In my view, simplicity is about how simply something could be described in some ideal language that has a single operator for fundamental entities--like minds! I don't care how easy it is to describe in other languages.

Onid:

But why should *I* care about the “ideal language?” Why should any atheist?

Or to put it another way: Any language can describe literally any concept in a single word. It’s a trivial statement. Why should the word for mind have any more meaning in your ideal language than the existing word “mind” in English?

The existence of a word to describe a concept doesn’t automatically make the concept well-defined, and it doesn’t make it convincing. Even the ability to describe something in a language doesn’t necessarily make it any more convincing. I can write “Pi equals 4” in English or really any other informal language, even though it’s obviously false.

The point of formal languages is that they follow rules, and if you follow those rules you’re guaranteed to get true results (this is called “soundness”). A key point of this post is that, under standard mathematical assumptions, all Turing-complete languages are equivalent, which means either:

1. Your ideal language is just an informal language, and saying it has a single word that means “mind” isn’t going to convince anyone of anything.

2. Your ideal language is a formal language, in which case you need to actually do the work to translate the word for “mind” into something reasonable.

Or, more flippantly: If your language can’t actually be translated into any other languages, then it’s not actually a language.
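To make the “single word proves nothing” point concrete, here is a toy model (the words and their definitions are invented purely for illustration): the length that matters is the definition fully expanded down to primitives, not the length of the name.

```python
# Toy model: a "language" is a dictionary of definitions over two
# primitives. A concept's real description length is its definition
# expanded to primitives; the name itself can be as short as you like.
PRIMITIVES = {'0', '1'}

def expanded_length(term, definitions):
    """Length of `term` once every defined word is chased to primitives."""
    if term in PRIMITIVES:
        return 1
    return sum(expanded_length(t, definitions) for t in definitions[term])

# An "ideal" language with one word, 'mind', defined via other words.
definitions = {
    'mind':    ['percept', 'percept', 'will'],
    'percept': ['0', '1', '1', '0'],
    'will':    ['1', '0', '1'],
}

print(len('mind'))                           # the name is short: 4 letters
print(expanded_length('mind', definitions))  # the concept is not: 11 primitives
```

Giving the concept a one-symbol name changes nothing about the second number, which is the one a K-complexity-style measure cares about.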

Nathan Ormond:

"An ideal language" is a cheque that will bounce... It's a complete cop-out, "trust me, bro". It's also a completely bizarre claim: if we don't speak this language, how can we evaluate sentences in it, or their complexity? Things like Shannon encoding have to do with character encodings. Matthew's view relies on empty promises, assumptions, and occult mental entities for semantic concepts, representations, and propositions. What annoys me personally about this is that it's the same overreach that also underpins his engagement with Bayesianism, "anthropic reasoning", "metaethics" (ignoring x-phi data), fine-tuning, and the general application of formalism. He has a lot to contribute, but these huge blindspots come from operationalising his ideas of terms like "complexity" in subjects like philosophy that have no standards.

John Encaustum:

I want to like this, but it’s overstating the case. Ideal languages, for instance languages with oracles for generating random numbers (thus infinite entropy and infinite K-complexity relative to a deterministic language) are genuinely important tools in the study of complexity. BB seems to be a troll building a gruesomely irrational and frustrating coalition to me, but a rational refutation of him needs to be stronger. Try going back to Kant’s Critique of Pure Reason or Sellars’ Empiricism and the Philosophy of Mind. They’re deserved classics and the secondary literature on them almost certainly contains everything you need for a much stronger refutation here.

Nathan Ormond:

I don't believe that these are ideal languages in the relevant sense. To me, these are simply other languages, no better or worse than English or Piraha depending on your aims as an individual and the context you're in. I believe I'm targeting the point I disagree with.

John Encaustum:

Alright, I’m also a pragmatic contextual thinker about the relative values of languages, but I meant “ideal” in the technical sense that is relevant to the theory of computation, and in the theory of computation there are meaningful partial orders of representational power among ideal languages. It’s similar to the spirit of Wittgenstein’s language games vs languages proper, and his use of compositions and comparisons of language games to investigate the philosophy of actual language. I still think BB made a point that people expert in the theory of computation will believe your criticisms of it have failed to address cleanly. (Most would also not care much, they’d see misuse on both sides and just move on because it’s common and annoying to talk through.)

Nathan Ormond:

IMO--unless I've misunderstood you (feel free to correct me)--this view CANNOT be a Wittgensteinian view, as Wittgenstein's view of language rejects that language essentially reduces to computation, or that meaning has anything to do with computation; there is no fundamental, ultimate, or more or less proper language (sans a community of speakers).

It's pluralism and disunity!

Logical, Mathematical and Computational languages are just languages -- no more privileged or less than Piraha or Hopi metaphysically.

Bentham's Bulldog:

It's not enough just that you could have a single term for any concept. The question is whether it's justified. Now, in my view, if something is fundamental--like minds, if dualism is right--then it's arbitrary to exclude it based on priors. Minds and morality, if they're non-physical, can't be described with math--but it would be unduly prejudicial to declare them to have a prior of zero.

Expand full comment
Onid's avatar
May 5Edited

Fair enough. I did briefly try to address this near the end of the post, though: it seems like you’re trying to have it both ways.

To me and (I confidently believe) most of the people you’re arguing with, complexity and simplicity are mathematical concepts. So if you say to me that you want me to believe in something that is entirely outside mathematics, fine, but you won’t be able to convince me that thing is simple. If it’s outside math, then it’s also outside the realm of “simple” and “complex.” To me that either makes the thing you’re talking about infinitely complex, or it makes even asking the question nonsensical.

And, again, that’s fine. But if the question is then “is God a simple explanation for the universe?” I think the answer has to be “no.” It certainly can’t be “yes,” at least not when a “no God” explanation *does* allow me to properly define the concept of simplicity.

Expand full comment
Some Lawyer's avatar

I see Bentham’s Bulldog as merely stating an intuitive view—we all experience our own minds, and so projecting other minds into the universe by analogy to our own feels simple (no advanced math needed). Such projections, somehow, seem to psychologically constitute an explanatory grounding—if you know an event resulted from the intention and actions of a mind, it feels like you explained it fully. Hence, theism (and animism, and conspiracy theories…).

I actually think this mind projection/intentional stance does a shocking amount of work in our intuitive/informal thinking. Perhaps worth writing on sometime.

Expand full comment
John Encaustum's avatar

This is almost exactly the starting point of Kant's Critique of Pure Reason and that book develops the ideas very profoundly, grounding a ton of good secondary literature and innovative later work as well. Definitely worth writing on!

Expand full comment
Phil's avatar

I think I agree with you in one respect, that a perfectly ideal language does not exist (or at least may be inconceivable). I don't think the simplest concept is in any way expressible in any formal language, but I guess I could be wrong in that view.

What might be a good candidate for the simplest concept? In mathematics, we might have a certain way of framing or defining simplicity, but unless your view is that only mathematics can give a coherent definition of simplicity, it may be the case that the simplest concept is not definable in any formal system. I guess you could also go the route of denying that there is any such ontological simplicity outside the scope of mathematical abstraction, but then nobody is justified in using parsimony as an explanation for why their view would be the correct one.

By the simplest concept, I would think of a concept which does not exist (at least not exclusively) within the context of any formal system or framework. Mathematics seems to be a kind of framework that addresses a certain set of (mathematically representable) objects. I would think of something perfectly simple as perhaps being too simple for any kind of representation - something which would be the necessary condition for such systems to be intelligibly formalized in the first place. In such a way, I might imagine the "simplest notion" to be ungraspable, or very difficult to grasp (theists might argue that God is like this).

But I think the standard objection to this view does not have to mention any kind of formal complexity: it is in no way evident that this simple concept, if it exists, is anything like the god described in any of the major world religions. Specifically, it is not clear that this concept would represent some kind of entity that possesses mental states or desires, or was actually an "entity" at all. It could just be some kind of platonic universal that we are cognitively unable to access or comprehend in any respect. This doesn't really help with the argument from parsimony, though, because I think you would at least have to have a notion of the thing - and if simply knowing what it is is all that is required to know that it exists, then there's no argument from parsimony to begin with, since its existence wouldn't be representable as a probability distribution (it could be the precondition for any kind of probabilistic reasoning).

Expand full comment
Apologetics Squared's avatar

Assuming that K complexity grounds the correct understanding of simplicity, I think it may very well be the case that God does decrease the K complexity of one's overall worldview, assuming one is a moral realist.

A moral realist naturalist, in order to tell the story of everything, will need to describe the moral facts and the physical facts. So, the K complexity of their worldview consists of the K complexity of their account of moral facts plus the K complexity of the physical facts. (I think this would be true even if the naturalist believes moral facts reduce to physical facts, because the naturalist should prefer simpler accounts of how those moral facts reduce to physical facts, which indicates that the K complexity of such an account contributes to the K complexity of their overall worldview.)

But the theist can take a shortcut! They only need to describe the moral facts, plus a single additional thesis like, "there is a Being which always does the most morally good thing." ("This everyone calls God.") The goal here is that this one extra posit with relatively low K complexity lets the theist posit way fewer physical facts as part of their fundamental description of reality. Instead of going through the steps of positing a multiverse to describe fine-tuning, a fine-tuned universe falls out of the fact that such a universe maximizes moral value, and there is a being which does the most moral thing. Of course, the theist doesn't have perfect understanding of all moral facts, so *how* a world like ours falls out of the moral facts can't be cashed out in exact detail, but in these discussions nobody can cash out in exact detail how most of the facts with which we are ordinarily acquainted, like "Australia exists," fall out of a fundamental description of reality. We can only gesture at what the relationship looks like.

(And of course, there is the question of how well the proposition "there is a Being which always does the most morally good thing" fares when considering how much suffering there is in this world, but that is a question about the Problem of Evil, not the simplicity of theism.)

Expand full comment
Onid's avatar

Very interesting reply. However, it seems you have a couple implied assumptions I'm not sure I would agree with:

1. I'm not a moral realist and don't believe in moral facts.

2. Even if there is such a thing, I don't think there would be a straight line between "the most moral thing" and "all knowing all powerful entity that creates the universe." God may be both of those things, but that doesn't mean one implies the other. And if being moral does imply omnipotence and omniscience, then to me that would imply that morality is actually complex, not the other way around.

3. Being able to describe an objective to maximize - like morality - does not mean you can easily maximize it. This is a foundational fact of computational complexity theory.

Expand full comment
Apologetics Squared's avatar

I think historians and detectives use the concept of simplicity in their reasoning. E.g., "this explanation of events is simpler than such and such alternative explanation." But it seems rather dubious to me that historians and detectives are referencing the K complexities of the proposed explanations! This leads me to believe that different fields treat simplicity differently, and there is no ultimate, single definition of "simplicity."

Additionally, I doubt very much that K complexity is the sort of concept of simplicity that should be adopted for metaphysics. For example, is it actually possible to analyze the respective K complexities of Platonism and Nominalism?

Expand full comment
Onid's avatar

Regarding detectives and historians, I disagree. I think they mostly use a form of simplicity that informally captures exactly what K complexity tries to capture formally - how many steps of reasoning are required to come to this conclusion?

As to your second point, you’re definitely right in a technical sense. But informally, I think K complexity illustrates the attitude we should be taking towards complexity.

The main goal of this piece was to explain why I find simplicity-based arguments for God unconvincing.

Expand full comment
Apologetics Squared's avatar

Thanks for the reply! I would like to offer some pushback. Let's take, for example, a scenario where multiple sources all claim that it rained on a particular day several centuries ago. Now, consider the two explanations:

R) It Rained on that particular day.

D) There was an orchestrated Deception to prank future historians into thinking that it rained on that day, when it in fact did not.

What is the K complexity of R? Well, rather high I should think! Rain is difficult to perfectly simulate in a computer. What about D? That would also be difficult to simulate, I would think. But arguably there would be way fewer subatomic particles that would need to be modeled in order to simulate D. This is all further complicated by the fact that if a K-complexity-minimizing multiverse exists, then presumably there is a universe in which R obtains, and a universe in which D obtains!

So while I would agree there is some kind of informal resemblance between "historical simplicity" and K complexity, it seems quite obvious to me that, empirically, K complexity is a rather poor guide to reconstructing historical records.

And while we're on the topic of reducing one kind of simplicity to another, I'd like to point out that if there is a single correct definition of simplicity, and it is, as Bentham's Bulldog suggests, describing something in an ideal language, it would make sense that minimizing K complexity would generally minimize the description of something in an ideal language.

Expand full comment
Onid's avatar

Ah, I think I see the issue. The right way to use (the informal idea of) K complexity in this scenario is not to think about it in terms of subatomic particles and physics - I framed it that way in the post because we were literally talking about ontologies: God and existence and the like.

In this case, the right way to use the idea of simplicity would really be with respect to some model of how the world works. Rain, presumably, is a primitive in this description: you say when it happened, where, and maybe how hard, and then you're done. A conspiracy is not - explaining it in terms of primitives - he said this, they did that, all for such and such reason - is not a quick task.

Now, is that formal? Hell no! Is there space to pick a different language where the conspiracy is more primitive? Maybe, but that isn't the language nearly anyone would choose to use a priori.

As for Bentham: the point is that you must pick your language before you decide what specific thing you're describing, otherwise there's always a language that will describe it in one word. And if your concept is valid you should be able to translate it to other languages as well.
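A toy sketch of that idea (the vocabulary and the "events" below are entirely invented for illustration; description length here is just a token count): fix a tiny language of primitives first, then compare how long each hypothesis is to state in it.

```python
# Toy 'model of the world': a fixed primitive vocabulary chosen *before*
# looking at any particular hypothesis. Description length = token count.
PRIMITIVES = {"rain", "on", "day", "agent", "writes", "claims", "motive"}

def description_length(hypothesis):
    """Count tokens, requiring each to be a primitive or a numeric literal."""
    for tok in hypothesis:
        assert tok in PRIMITIVES or tok.isdigit(), f"not expressible: {tok}"
    return len(hypothesis)

# R: it rained on day 1287.
rained = ["rain", "on", "day", "1287"]

# D: three independent chroniclers each fabricate the same rain report.
deception = []
for scribe in ("1", "2", "3"):
    deception += ["agent", scribe, "motive", "agent", scribe,
                  "writes", "claims", "rain", "on", "day", "1287"]

# With the language fixed up front, the conspiracy costs far more to state.
assert description_length(rained) < description_length(deception)
```

A language with a single `deception` primitive would flip the result, which is exactly why the language must be chosen a priori rather than after seeing the hypothesis.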

Expand full comment
Kristian's avatar

This was very clearly written, thank you.

I suppose one can philosophically define God as a simple and infinite mind, but that is not an argument for God’s probability.

Expand full comment
David Cruz's avatar

It's not the Christian concept of god that I find self-evident and worthy of circumspection; it's the fact that whether you are an atheist or a theist, you have to come up with an explanation for how something came from nothing and what existence even is. If you're a theist you have the even-thornier question of "if God made existence, where did God come from?" When that is beyond our current comprehension, taking the position that the least complicated explanation is atheism seems premature.

Expand full comment
Onid's avatar

I agreed with everything up to that last sentence.

I always assumed that once you get to the point of asking where God came from, atheism seems obvious. Think of it this way: either way you need to explain where the universe comes from; it’s just that with theism you also need to explain where God came from.

Expand full comment
Noah Birnbaum's avatar

A few points:

1: From my naive and uncertain take, you're overrating how much work K does here. Kolmogorov complexity, like any other way (that I’m familiar with) of quantifying complexity, is language dependent, meaning that on a very deep level you can’t just solve the Grue problem and others like it so easily (maybe this is a case like that - see next comment). In other words, the basicness of your simplicity function is merely smuggled in, and there are modeling assumptions here (that I think Bentham wouldn’t like). I think I buy these, but I think you need to do some more work for them.

2. Secondly, I think the reason Bentham thinks that God is very simple is because he believes in some form of mind that is separate from physical stuff (for hard problem reasons), and he thinks that mind is fundamental. This may get at where the language dependency of K becomes relevant -- it’s not that "God is so complex, so beyond all comprehension, that he can’t even be confined by ideas like algorithmic complexity.”

Curious to hear what you think about this!

Expand full comment
Onid's avatar

1. Actually, it turns out Kolmogorov Complexity isn’t language dependent, at least not in a meaningful way. I talked about this in the “What’s in a word?” section of the post, but in case I didn’t do a good job explaining it there, the basic idea is that all “Turing Complete” languages are equivalent - they can be translated between one another. That means you can basically just pick any old Turing Complete language and you’re fine.

One thing I didn’t mention in the essay because I didn’t want to be too technical is that typically we talk about K Complexity for a “Universal language” which is defined like this: First, fully describe the language you’re going to be using, then write the description in that language. This completely circumvents the issue.

2. I think you summed up the issue nicely. I would say the goal of this essay was to clearly state the case against that stance, more so than to actually convince anyone. However, I personally consider the laws of mathematics to be unbreakable, so I don’t really buy that there are things outside it. It’s a difference of perspective that can’t really be resolved, but I still felt that my stance wasn’t being properly represented.
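On point 1, a toy, hedged illustration of the invariance idea (the two mini-languages below are invented for the example): one language can simulate another by paying a one-time cost, the size of an interpreter, which does not grow with the data being described.

```python
# Toy illustration of the invariance theorem: a "language" here is just
# a function that decodes a description string into an output.

def lang_a(desc: str) -> str:
    """Language A: descriptions are literal strings."""
    return desc

def lang_b(desc: str) -> str:
    """Language B: 'nxS' means repeat S n times."""
    n, _, s = desc.partition("x")
    return s * int(n)

# A repetitive string has a short description in B...
assert lang_b("10xab") == "ab" * 10

# ...and A can simulate B by bundling a fixed-size B-interpreter with
# every B-description. The overhead is the interpreter's length, a
# constant independent of the data.
B_INTERPRETER = "n,_,s=d.partition('x');out=s*int(n)"

def lang_a_running_b(desc: str) -> str:
    env = {"d": desc}
    exec(B_INTERPRETER, {}, env)  # run the interpreter on the description
    return env["out"]

assert lang_a_running_b("10xab") == lang_a("ab" * 10)
```

So K_A(x) ≤ K_B(x) + len(interpreter) for every x, which is why the choice of Turing-complete language only matters up to a constant.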

Expand full comment
Noah Birnbaum's avatar

1: Hmm, idk the technical stuff here, but I’m not sure this is actually correct - I know that K can’t even solve the grue problem, so I would be surprised if you can solve this stuff. ChatGPT agrees with me here (when I put both our comments in):

They correctly point out that Kolmogorov complexity is invariant up to a constant across universal Turing machines. That’s standard and important. But this doesn’t refute your deeper point: which machine you choose still encodes assumptions about what is “natural” or “basic.”

Saying “just define a universal language and describe everything in it” is true in the technical sense, but it glosses over the philosophical issue of why one language should feel more natural or privileged than another. It assumes the problem is solved once the measure is defined, rather than justified.

Their response is clear, but it’s a bit too quick to assume that technical invariance resolves philosophical concerns.

If you can send something here that makes the point you’re trying to make more clear/ try re-explaining that might be helpful.

2: I think this makes sense, and I’m somewhat (though unsure?) sympathetic. I would, however, say that you need to do more work for the claim that math is unbreakable, and I'm curious where you’re taking/thinking this from (I don’t think Tegmark nor the LW stuff I've read has convinced me of this, but any other resources would be great).

Expand full comment
Onid's avatar

Ah. I had not actually heard of the Grue problem before your first comment, and hadn’t given it enough thought. I’ve read a bit more about it now, and I think I can give a more complete answer.

It is true that when measuring the K complexity of something small, grue (i.e. “green before time t and blue afterwards”) is not necessarily simpler than green based on your language choice. However, this does not defeat the theory entirely. In my googling I found this paper [1] which explains why, but it’s very technical and I’m not sure if there’s a plain English explanation of the solution, so I’ll do my best:

I think it’s obvious that there’s an intuitive sense in which green is simpler. If you were to graph the wavelength of light over time, green would be a straight line and grue would have a jump at time t. So let’s capture that intuition and say that there are “natural” and “weird” ways to define things (my terminology, these terms aren’t in the literature): “green” is natural, “grue” is weird.

So let’s go back to K complexity. What the paper I linked basically says is that if you select your language *before* you’re given the thing you want to encode, then as the size of what you’re trying to encode increases in the limit, the amount of “weird” definitions you can take advantage of decreases proportionally, and quite quickly. And since the things we want to encode are “God” or “the universe,” we’re not going to be able to take advantage of weirdness in a meaningful way.

This actually makes a lot of sense if you think about it. Even if any given language has an arbitrarily large amount of weirdness, it will still be finite. So when the things you’re encoding get large or complex enough, they will eventually exhaust all of the weirdness and you’ll be forced to rely on more basic constructions.

[1] https://scispace.com/pdf/philosophical-issues-in-kolmogorov-complexity-5efp629ccn.pdf
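For what it's worth, a crude, hand-wavy version of this can be played with by using a general-purpose compressor as a stand-in for a fixed description language (zlib here; this is an analogy to K complexity, not the real thing):

```python
# Fix one compressor (the 'language') *before* seeing the data, then
# compare description lengths of 'green' (one colour throughout) and
# 'grue' (green until time t, blue after).
import zlib

def desc_len(seq: str) -> int:
    """Compressed size as a rough proxy for description length."""
    return len(zlib.compress(seq.encode()))

t = 5000
green = "g" * (2 * t)       # same colour at every time step
grue = "g" * t + "b" * t    # jumps from green to blue at time t

# The 'weird' predicate carries a small fixed extra cost for the jump...
assert desc_len(green) <= desc_len(grue)
# ...and that gap stays roughly constant as t grows, so it washes out
# for large objects -- the 'finite weirdness' point above.
```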

Expand full comment
Noah Birnbaum's avatar

Thanks for the reply and taking time to read stuff on grue.

I think this explanation probably isn’t sufficient (though I’m not an expert at this stuff). The intuitive way green is simpler or more natural just doesn’t actually work because it already assumes some definitions - for any argument that you give, the grue person can say literally the same thing for the opposite. The same issues apply to Bayesian convergence (Hawthorne 1994), and this type of result is what implies the no free lunch theorem of learning theory (Sterkenburg and Grünwald 2021). I’d be pretty surprised if this conversation would be greatly different from those.

I’m a little confused by the paper and explanation, but I will take a deeper look another time.

Expand full comment
Onid's avatar
May 9Edited

Hmm…

I can’t say for sure, but I’m not convinced that the issues with “no free lunch” apply here. NFL basically says you need to select an inductive bias, and that all inductive biases will have strong points and weak points. So if your bias is for “green,” then of course you will fail at some future time if the true state of the universe is “grue.”

K-Complexity (and its application to inductive inference, Solomonoff Induction) are not claiming to be inductive-bias free. “Shortest description length” is in fact a form of inductive bias. So then the question is just how to formalize description length. And in the limit, as the things you’re describing get more complex, the descriptions are going to converge.

Here’s one more way to see that: The Invariance Theorem says that there is only a constant factor in translating from one language to another. So if the thing you’re describing is orders of magnitude larger than that constant factor, the effects of that constant will asymptotically disappear. “Grue” is only a problem because its description seems intuitively short relative to that constant.
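Back-of-the-envelope version of that last point (the interpreter size `c` below is a made-up number, purely for illustration): by invariance, K_A(x) ≤ K_B(x) + c, so the relative overhead of switching languages vanishes as the object grows.

```python
# c: hypothetical size of a translator between two languages, in symbols.
c = 120

for k_b in (10, 1_000, 1_000_000):
    k_a_bound = k_b + c  # worst-case description length after translating
    print(f"K_B = {k_b:>9}: worst-case K_A / K_B = {k_a_bound / k_b:.4f}")

# For a 10-symbol object the bound is 13x worse; for a million-symbol
# object it is within 0.02%. 'Grue'-style tricks live inside c.
assert (10 + c) / 10 == 13.0
assert (1_000_000 + c) / 1_000_000 < 1.001
```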

Expand full comment
Daniel Greco's avatar

I know this thread is old, but just wanted to (hopefully!) clarify something. My understanding is while K complexity is certainly language-relative, the reason computer scientists often aren't bothered by this is that they're mostly interested in claims about limit behavior, and whenever two languages can be intertranslated at some fixed computational cost (ie, there's a finite size translation program), then for the sorts of claims about limits that CS likes to prove, it won't matter which language you pick.

Not sure how that bears on this particular application though. My suspicion is that without saying a lot more, it's reasonable to suspect that a "language" with a single symbol for "god" isn't likely to be a formal language at all, and so isn't likely to be translatable (finitely or otherwise) into other, formal languages.

Expand full comment
Onid's avatar

Undefinability in a formal language was my main point, but it should be pointed out that I don’t actually believe that either God or the Universe would really be a simulation. The point is that if I fixed an arbitrary language *before* I actually wrote out my code for God or the universe, then what would the simplicity be?

Expand full comment
Noah Birnbaum's avatar

It seems like now you are shifting the claim from relying on a formal language to the least computationally expensive language (which is totally fine). While you could do this (I’m unsure if this actually works/ if this is the language you would wanna define simplicity in/ this results in only one language and would be interested in hearing more), you still would need to provide some justification for defining simplicity/ assigning priors this way -- like, some people argue, we live in a simulation.

My guess (though this is very plausibly wrong) is that CS people wouldn’t care about this because they are assuming something like our standard intuitions about induction, which is exactly what we’re trying to justify here and so can’t be taken for granted.

On the second point, if you could translate the term god/mind as someone needs it into any formal language, my suspicion is that there can be a language where it is a single symbol and other symbols are built around it (like grue being the simplest term and the term green being a disjunctive proposition). Therefore, I’m not sure this is good justification to say that it is likely that god isn’t the right one (albeit this is literally true for all things that can be inputted to a formal language, so it means more that I’m confused and less that I’m taking a strong stance on certain languages over others).

Lmk if you think I’m getting something wrong here.

Expand full comment
Dylan Black's avatar

Related to your note about timesteps in relativity: at least for accelerator physics, I’ve never seen anyone worry about the timestep per se; usually you worry about the map being symplectic, such that the Hamiltonian dynamics are accurately simulated and phase space area is preserved (Liouville’s theorem).

I suppose you could try simulating everything in its own proper time? General relativity is fundamentally local, in that the coordinate map(s) describing space must consist of a set of charts that overlap all space, but not necessarily agree beyond their transition regions. Each coordinate patch has its own notion of time that only agrees locally.
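For readers who haven't seen the contrast: a minimal sketch (my own toy example, not from any accelerator code) of a symplectic leapfrog step versus explicit Euler on a harmonic oscillator with H = (q² + p²)/2. The symplectic map keeps the energy bounded over long runs; Euler's energy drifts steadily upward.

```python
# Harmonic oscillator: dq/dt = p, dp/dt = -q.

def euler_step(q, p, dt):
    """Explicit Euler: not symplectic, energy grows by (1 + dt^2) per step."""
    return q + dt * p, p - dt * q

def leapfrog_step(q, p, dt):
    """Velocity Verlet / leapfrog: symplectic, preserves phase-space area."""
    p = p - 0.5 * dt * q  # half kick
    q = q + dt * p        # drift
    p = p - 0.5 * dt * q  # half kick
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

dt, steps = 0.1, 10_000
qe = ql = 1.0  # same initial conditions for both integrators
pe = pl = 0.0
for _ in range(steps):
    qe, pe = euler_step(qe, pe, dt)
    ql, pl = leapfrog_step(ql, pl, dt)

# Euler's energy has blown up by many orders of magnitude;
# leapfrog's oscillates within O(dt^2) of the true value 0.5.
assert energy(qe, pe) > 10 * energy(ql, pl)
assert abs(energy(ql, pl) - 0.5) < 0.01
```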

Expand full comment