In his most recent newsletter, Venkatesh Rao pulls up a Twitter thread he wrote in 2017 making what he calls an ontological distinction between boundary intelligence and interior intelligence. This was before transformer models such as GPT-1 began to attract wider attention. The distinction Rao makes is central to understanding what current large language models (LLMs) can and cannot do. Here is his unedited text from 2017:
1. I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence
2. Boundary intelligence is how you deal with information flows across the boundary of your processing abilities
3. Interior intelligence is how you process information. Includes logic, emotional self-regulation, etc.
4. A thesis I've been converging on is that boundary intelligence is VASTLY more consequential once interior intelligence exceeds a minimum
5. Boundary intelligence is by definition meta, since you're tuning your filters and making choices about what to even let hit your attention
6. I think it is highly consequential because almost all risk management happens via boundary intelligence (blindspots, black swans etc)
7. Interior intelligence is your poker skill and strategy. Boundary intelligence is picking which table to sit down at
8. Interior intelligence is reading a book competently, extracting insights and arguments. Boundary intelligence is picking books to read.
9. Interior intelligence is being a good listener. Boundary intelligence is deciding whom to listen to.
10. Basically, better input plus mediocre IQ beats bad input and genius IQ every time, so boundary intelligence is leverage
11. And obviously, boundary intelligence is more sensitive to context. The noisier and angrier info streams get, the more BI beats II
12. Most of boundary intelligence has to do with input intelligence, but output intelligence becomes more important with higher agency
13. Output intelligence is basically the metacognition around when/where/how/to-whom/why to say or do things you are capable of saying/doing
14. We think a lot about external factors in decisions, but output intelligence is about freedom left after you've dealt with external part
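Point 10 above makes a quantitative-sounding claim: better input plus mediocre processing beats bad input plus genius processing. A minimal toy simulation can make the intuition concrete. Everything below is an invented illustration, not anything from Rao's text: two agents estimate the same true value from a stream of sources, half honest and half junk. "Boundary intelligence" is idealized as the ability to filter out junk sources before processing; "interior intelligence" limits are modeled as noise added during processing.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 0.7   # the quantity both agents try to estimate
N_SOURCES = 200    # signals available per trial
N_TRIALS = 500

def sources():
    """Half the sources are honest (true value plus small error),
    half are junk (unrelated uniform noise)."""
    out = []
    for _ in range(N_SOURCES):
        if random.random() < 0.5:
            out.append(("honest", TRUE_VALUE + random.gauss(0, 0.05)))
        else:
            out.append(("junk", random.uniform(0, 1)))
    return out

def estimate(filter_junk, processing_noise):
    """filter_junk stands in for boundary intelligence (choosing
    what to let in); processing_noise stands in for the limits of
    interior intelligence (how well the input is processed)."""
    signals = sources()
    if filter_junk:
        picked = [v for kind, v in signals if kind == "honest"]
    else:
        picked = [v for _, v in signals]
    return statistics.fmean(picked) + random.gauss(0, processing_noise)

def mean_abs_error(filter_junk, processing_noise):
    errs = [abs(estimate(filter_junk, processing_noise) - TRUE_VALUE)
            for _ in range(N_TRIALS)]
    return statistics.fmean(errs)

# High BI, mediocre II: filters its inputs, but processes them noisily.
high_bi = mean_abs_error(filter_junk=True, processing_noise=0.05)
# Low BI, "genius" II: perfect processing of an unfiltered stream.
low_bi = mean_abs_error(filter_junk=False, processing_noise=0.0)

print(f"high-BI / mediocre-II error: {high_bi:.3f}")
print(f"low-BI  / genius-II  error: {low_bi:.3f}")
```

The unfiltered agent is biased no matter how perfectly it averages, because half of what it averages is noise; the filtering agent's internal sloppiness is a smaller penalty than that bias. The sketch is obviously rigged in exactly the way Rao describes: once interior intelligence clears a minimum bar, input selection dominates.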
Next, Rao extracts the following from the abstract of a forthcoming paper by Yadlowsky et al.:
…when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.
He then continues his text, in the following selected clips:
Translated into the idiom from the fourteen points above, this translates into “It’s all interior intelligence, just within a very large boundary.” There is no boundary intelligence in current machine learning paradigms. There isn’t even an awareness of boundaries; just the ability to spout statements about doubt, unknowns, and boundaries of knowability; a bit like a blind person discussing color in the abstract.
This is not to say AI cannot acquire BI. In fact, it can do so in a very trivial way, through embodiment. Just add robots around current AIs and let them loose in real environments.
The reason people resist this conclusion is irrational attachment to interior intelligence as a sacred cow (and among computer science supremacists, a reluctance to acknowledge the relevance and power of embodiment and situatedness in understandings of intelligence). If much of the effectual power of intelligence is attributable to boundary intelligence, there is much less room for sexy theories of interior intelligence. Your (cherished or feared) god-like AI is reduced to learning through FAFO (Fuck around and find out) feedback relationships with the rest of the universe, across its boundary, same as us sadsack meatbag intelligences with our paltry 4-GPU-grade interior intelligence.
In their current (undoubtedly very impressive) incarnation, what we have with AI is 100% II, 0% BI. Human and animal intelligences (and I suspect even plant intelligences, and definitely evolutionary process intelligence) are somewhere between 51-49 to 99.9-0.1% BI. They are dominated to varying degrees by boundary intelligence. Evolutionary processes are 100% BI, 0% II.