10 Comments

This post reminded me that the book "Blindsight" by Peter Watts exists; in his own words "a whole book arguing that intelligence and sentience are different things". He argues via sci-fi that humans attribute too much agency to our conscious minds; instead most decision-making is done subconsciously, and our consciousness is just along for the ride. ("If the rest of your brain were conscious, it would probably regard you as the pointy-haired boss from Dilbert.")

Lots of fascinating references for further reading at the end: https://www.rifters.com/real/Blindsight.htm

Metzinger's "Being No One" was a major influence: https://mitpress.mit.edu/9780262633086/being-no-one/

I appreciate this post and will now make an incredibly pedantic point, not unlike your observation that almonds aren't legumes. But perhaps it has larger implications: The proof supplied above is not Shakespearean and is not prose. It is poetry that uses a few archaisms that code as "olde timey" and thus as Shakespearean to those largely ignorant of Shakespeare. Shakespeare's plays do not generally rhyme but are in blank verse (unrhymed iambic pentameter) and they feature original metaphors and complex word play, much of which is lost on modern audiences that aren't reading footnotes. This poem is doggerel of the sort that my sister-in-law writes for fun family birthday cards. No one would mistake it for Shakespeare or even an imitation of Shakespeare.

Is this failure to understand what "Shakespearean" means an indicator of something more fundamental than an early stage of development? I suspect not, but I'd like to see those with more knowledge address the question. ChatGPT generates a lot of on-command poetry, but it is all consistently bad poetry--though fun--and rarely, if ever, a reasonable imitation of a prompted style. I don't expect a "Shakespearean" poem to rise to the level of Shakespeare, but it should at least read like something someone in the Elizabethan period might have written.

I think there's some subtlety here. You're right that, in a technical sense, the proof is very much not "Shakespearean." But it is what a random person off the street might consider Shakespearean, and it's similar to what a random moderately well-educated person might produce if they were asked to make something "Shakespearean."

This distinction is important because that's the level at which LLMs like these operate: their knowledge comes from the internet first and foremost. There is a second stage in systems like ChatGPT where expert knowledge gets mixed into the system, but that may not fix the issue you're describing. And in that particular example no expert knowledge had been introduced, since it was taken from an earlier version of the model.

Which isn't to say your summary of the implications is wrong, per se. Certainly, current LLMs are mostly good for producing consistent B-minus, low-creativity outputs. But this is just one point on a possibly steep upward curve.

I think the fundamental problem you have to overcome is that human beings have a theory of mind that is NOT probabilistic in some core ways, because those conditions are part of the genetic endowment. Universal grammar is literally encoded in our genes, though we have no idea where. A three-year-old who has been exposed to orders of magnitude less information than any LLM can immediately, reliably, and unconsciously detect that a language string is asyntactic, including language strings of great length and sophistication, despite never having been exposed to a language string of that type before.

A few dozen deaf Nicaraguan children spontaneously generated a functioning human grammar despite all of them being linguistically deprived, most of them coming from poverty, and many of them suffering from cognitive and developmental disabilities. They did so against explicit efforts by adults to stop them. How is this possible? Because there is a rule-bound linguistic system, and by extension a rule-bound cognitive system, that is embedded in the neurological system in utero through the process of fetal development.

That language capacity, as distributed across the entirety of the human species, is easily the most powerful processing system on the planet; the entirety of the internet doesn't come close. I don't think someone like you is necessarily overstating the capacities of these systems. But you're certainly understating the capacities of the rule-bound, process-bound human language capacity.

First of all, thanks for reading. A lot of effort went into this post so it's very rewarding to see your response.

As to your points, I think there are two things here. First is the fact that children come with certain knowledge pre-programmed into them. I actually addressed this point in an earlier draft but ultimately decided to take it out for space reasons.

The explanation for it is actually quite straightforward, though: babies aren't starting from scratch. Billions of years of evolution have given them neural configurations with certain knowledge pre-loaded, and a propensity for other knowledge to be added quite easily. Contrast this with artificial neural networks, which are trained starting from a completely random configuration and therefore must train much longer to gain structure.

The biggest fact in support of this argument is that neural networks are well known to learn new concepts faster the more pretraining they have. This is such a basic fact about neural networks that a huge number of AI systems are not trained from scratch, but rather “fine-tuned” from existing networks. Even long before LLMs, when you needed a neural network that could do something specific but didn't have a lot of data, you took a big, already-trained network and just did a small training round on the data you did have.
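
The fine-tuning point can be illustrated with a toy sketch of my own (not anything from the post; all names and numbers here are invented): gradient descent starting from a weight that is already close to the target, standing in for "pretrained," converges in far fewer steps than starting from a random weight, standing in for "from scratch."

```python
import random

def steps_to_converge(w, target=3.0, lr=0.1, tol=1e-3):
    """Run gradient descent on the loss (w - target)^2 and
    count the steps until |w - target| drops below tol."""
    steps = 0
    while abs(w - target) >= tol:
        grad = 2 * (w - target)  # derivative of the squared error
        w -= lr * grad
        steps += 1
    return steps

random.seed(0)
scratch = steps_to_converge(random.uniform(-10, 10))  # random init: "from scratch"
finetune = steps_to_converge(2.5)                     # init near target: "fine-tuning"
print(scratch, finetune)
assert finetune < scratch
```

It's a one-parameter caricature of a billion-parameter network, but the mechanism is the same: the closer the starting configuration is to a useful one, the less training is needed.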

As to the second part of your point, that this learning is specifically rule-based, that's probably the one area where I won't be able to convince you entirely. I think there's evidence to suggest that rules simply emerge from probabilistic reasoning, and I've made my case for that, but there really isn't going to be any proof. All we can do is agree on measurable standards for what we are willing to call "reasoning," and then wait and see if AI evolves to meet those standards. I think they will.
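
As a toy illustration of what "rules emerging from probabilistic reasoning" can mean (again my own sketch, with invented data and names): a model trained only on bigram counts, with no alternation rule ever written down, ends up assigning higher probability to strings that obey the pattern in its training data than to strings that violate it.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count character-to-character transitions in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in corpus:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return counts

def score(counts, s):
    """Product of add-one-smoothed conditional bigram probabilities
    (smoothing over an assumed 4-character alphabet)."""
    p = 1.0
    for a, b in zip(s, s[1:]):
        total = sum(counts[a].values())
        p *= (counts[a][b] + 1) / (total + 4)
    return p

corpus = ["abab", "ababab", "abababab"]  # strings obeying an a/b alternation "rule"
counts = train_bigrams(corpus)
grammatical = score(counts, "ababab")
ungrammatical = score(counts, "aabbab")
assert grammatical > ungrammatical
```

The "rule" (a is followed by b, and vice versa) exists nowhere in the code; it is implicit in the learned statistics, which is the general shape of the emergence claim, scaled down enormously.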

Interesting!

Thank you both so much for this thread.

You two are so obviously well-informed about the topic of AI, I thought I’d try to suss out some potential solutions from you both. I direct a foundation that is big-tech adjacent and so, like many, I’ve been following recent potentially dystopAIn trends with much interest.

For my part, I tend to want to follow the lead of people like Marietje Schaake and the EU; but, when Tristan Harris (in a recent episode of the YUA podcast) mentioned Minister Tang of Taiwan’s efforts as a model for digital democracy, I did a deeper dive into her thoughts on AI. In that regard, there are a couple places where I strongly disagree with Tang.

√ She uses a clever analogy of AI to “fire”—but then, I think, drops the ball on the solution (‘fire can burn down a city, but we didn’t license fire out to five people, we taught 5-year-olds how to cook’). I can’t speak to other countries; but, of course, in the US, this isn’t correct—we created robust publicly funded emergency fire departments and, for the most part, I think we do our best to keep 5-year-olds the hell away from fire.

√ She wishes for people/private-sector/public-sector partnerships and that each new generation be allowed to develop ‘media competencies’ as opposed to learning ‘media literacy’ (‘cause each new generation is better equipped to make decisions), which—besides being demonstrably false about the civic competencies of younger generations—I would analogize to teaching each new generation about gun culture by dressing them up as deer and having them run around in the woods. Our experience with Social Media has been such an epic fail, I personally cannot stomach the idea of trusting AI to fix AI like that.

While (it turns out) only a small percentage of Tang’s social activism has been successful in practice, the ways her social movement leveraged public data and their “g0v” + Polis + AI digital infrastructure to stave off the pandemic and to reform Uber in Taiwan are undeniable successes. However, Taiwan’s starting point was already a hyper-engaged, hyper-densely-populated general public. And privacy would seem to be completely immolated in Tang’s approach…indeed, basically the entirety of the general public’s data was stolen within their current system.

I just wondered in which direction you two think the solutions to dystopAI will mostly be found: Industry-led (the US approach), Government-led (the EU approach), or People-led (the Taiwan approach)? Or, is this a false choice because we can more easily identify the contours of a hybrid approach?

Unfortunately, I'm not familiar with the exact arguments you mentioned here, but I'll do my best to reply. Since I don't know much about Taiwan's approach to technology, I'm not really sure what you mean by "people-led". Just looking at the quote you provided, I take the fire comments to be arguing that even though AI is dangerous, the right thing to do is to teach everyone to use it responsibly. If that is what she's saying, then unfortunately the truth is that we may not have a choice. In the next couple of years, AI is going to become ubiquitous, and I think our capacity to limit it will be very restricted. ChatGPT has a corporate gatekeeper, yes, but other LLMs like Meta's LLaMa are already open source, and the smallest ones can be run on a laptop without internet access.

This technology is going to become too common to totally regulate, so at some level, we are going to have to adapt to a world where it exists everywhere. If you have any written resources on it, I'd be curious to see what her exact policy proposals actually are. If it's just an approach to educating the population on this tech, then without knowing more I think the devil will be in the details. It could be useful if done well, but I worry that the adults teaching this stuff won't actually have a better understanding than their students, leading to all sorts of issues.

In general, my thinking on government approaches to AI is still developing, and I'd like to produce a detailed post on it in the near future, so I'll save most of my thoughts until then. Before I do, I will be sure to take some time looking at Taiwan's approach to tech regulation, so thank you for the lead.

I really look forward to your further thoughts.

One clarification about why teaching 'competencies' as opposed to 'literacy' raises alarm bells for me: in Media Literacy circles, *competencies* are distinguished from *skills* in that competencies are binary (Can you do it? yes/no), necessary-but-insufficient abilities, whereas *skill-building* refers to the deeper understanding needed to be sufficiently empowered within that technology/medium.

In U.S. education, this overall trend from skills to competencies over the past several decades has meant teaching folks the bare minimum needed to participate in the economy (so "Civics" became "Consumer Education" in the K-12 system, for example). So, in that context, 'people-led' could amount to turning everyone loose on it and hoping for the best.

The code for Taiwan's systems is all supposed to be 100% open, so it should be relatively easy to research in depth. I relied on Minister Tang's statements about what they were doing, as well as essays from the U of Nottingham's Taiwan Studies Programme for more context.

Be Well!

May I suggest you read Erik Hoel’s “The World Behind the World”.

My subjective (intrinsic perspective) takeaway...

AI is no more than an electronic tower of babble...or from his book, “a stochastic parrot”.

...if we ground the study of consciousness in physics...the whole is greater than the sum of its parts.

Science need not deny its own boundaries....
