I heard a poignant thing this morning. Listening to Ezra Klein discussing with his guest the weirdness of AI and the culture that produces it, I found a certain point of inspiration in something that he said.
Called into question was the construct of “intelligence.” Ezra mentioned a number of recent pieces in the media pushing back on the idea of intelligence, and on whether this new technology can rightfully be called that.
Tuning into the subject of AI, as I have been recently, I learned something the other day about the way this tech parses language in the process of constructing its “understanding,” and its subsequent responses in the context of conversational chat. Specifically, each word or word-fragment is represented as a “token” and assigned statistical weights across an array of metrics having to do with its descriptive qualities.
Quite simply, when AI considers a phrase of input, such as “what is the nature, origin, and active source of wisdom?” it, like me, interprets the phrase based on what it understands of the parts. From a purely mathematical perspective, the software generates a very explicit data set made up of, say, roughly 14 tokens with 49 metrics each, so nearly 700 distinct data points arranged in a specific pattern. (It is of course doing substantially more than this, but that technical detail is both beyond the scope of my understanding and unnecessary to the point at hand.)
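To make the arithmetic concrete, here is a toy sketch of that token-and-metrics idea. It is not how any real model works: actual systems use learned subword tokenizers (such as BPE) and trained embedding vectors with thousands of dimensions, whereas this sketch just splits on whitespace and punctuation and derives stand-in numbers from a hash. The 49-metric figure is simply borrowed from the essay above.

```python
# Illustrative only: real LLM tokenizers and embeddings are learned,
# not hashed; the numbers below are toy stand-ins for the essay's "metrics."
import hashlib

def toy_tokenize(text):
    """Split text into crude 'tokens' (a stand-in for real subword tokenization)."""
    return text.lower().replace("?", " ?").replace(",", " ,").split()

def toy_embedding(token, dims=49):
    """Map a token to a deterministic vector of `dims` pseudo-metrics."""
    digest = hashlib.sha256(token.encode()).digest()
    # Scale each byte into roughly [-1, 1); a real model learns these values.
    return [(digest[i % len(digest)] / 128.0) - 1.0 for i in range(dims)]

prompt = "what is the nature, origin, and active source of wisdom?"
tokens = toy_tokenize(prompt)
vectors = [toy_embedding(t) for t in tokens]

print(len(tokens))                    # token count for the phrase
print(len(tokens) * len(vectors[0]))  # total distinct data points
```

Run on the essay’s example phrase, this crude splitter yields 13 tokens, and 13 × 49 = 637 data points, in the same ballpark as the “nearly 700” figure above.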
If our metaphor for “intelligence” reflects a simple definition: “the ability to acquire and apply knowledge and skills,” then I think we needn’t spend much time on the point of our socially constructed meanings for the word. Let us agree that the algorithm is in fact accomplishing that definition and therefore, let it be “intelligent,” in that sense of the word.
[There is a deep rabbit hole here I am tempted to explore around more complex contemplations regarding the nature of “intelligence,” partially stimulated by the response just now of ChatGPT 4 to the prompt “define intelligence,” but I will let that lie for the time being.]
A more immediate and perhaps more pressing point was made by Ezra: that we as industrialized, dominant-culture humans have elevated the construct of Intelligence to very particular (and perhaps peculiar) heights.
It is by our self-definition as “intelligent beings” that we set ourselves apart from other living organisms. Claiming, if only implicitly (and never mind the WEIRDness), that what we simply decided to call intelligence is an empirical marker of righteousness, we use our subjective conception of it to rule over, neglect, abuse, destroy, disenfranchise, disempower, and otherwise disregard countless forms of life, many of them human, “like us.”
In fact, as Otto Scharmer points out, we have even used this implicit priority of “intelligent construction” to enact the above disgraces upon our very selves, leaving countless otherwise apparently empowered individuals feeling soul-lost and at the effect of runaway systems.
These systems are consuming our hope and vitality, our relationships, and even the biosphere itself.
Much concern is raised today about the dangers of runaway AI, and I would like to accomplish two things here. One is to point out that this concern, while practical in itself, might also be usefully considered as a reflective metaphor for other living processes already underway. The other consideration, into which I hope you will feel invited, is simply the priority given to knowledge itself, and how that impacts our collective evolutionary journey.
Returning now to the wisdom and insight of Mr. Klein, he referenced specifically humanity’s industrial-scale abuses with regard to animals used for food. If we are basing our “right” to keep chickens in such abysmal conditions worldwide on some idea of our “superior” intelligence, are we not setting ourselves up emotionally to anticipate similar abuses should we encounter, or even create, entities with intelligences that, within that construct, eclipse our own?
Are the machines in the Matrix simply doing our own moral bidding?
What is this priority that we place upon cognition, on the discrete acquisition of knowledge?
From a purely practical view, it is plainly useful to know where to find food, when to avoid predation, and how otherwise to increase one’s likelihood of successful procreation. Life itself as we know it continues on the basis of these very simple, data-centric algorithms. At least, that is how we conceive of it today.
In the proposed early days of our universe, cosmic inflation led to the formation of fundamental particles such as quarks, electrons, and anti-particles, which in turn became elements, and so on… While I can say this because we “know” it to be so based on our cognitively projected construct of knowledge, without the conventional “observer,” did these processes unfold based on knowledge?
Is knowledge itself a fundamental expression of a thing? What is knowledge? What does it mean to know? Is knowledge itself a fundamental particle of existence? Is data an abstraction, or a thing? In other words, is the map the territory?
Data, knowledge, intelligence; these constructions of meaning certainly seem to point to something vital in the living evolutionary process, and yet.
Once extracted from a timeless, boundless, transcendent unity beyond expression or description, all things exist in context. The thing, the context, the boundary that brings them into existence, all of these dance and shift and change, together. Indeed, they come into and go out of being together. A Buddhist view identifies this as the construct of no-thingness, emptiness, or shunyata.
From the time that life expressed itself as a slightly nervous organism that could, with some agility, proactively escape predation, knowledge (or, at the time, perhaps proto-knowledge) came into its own as an evolutionary technology and utility. But does that make it empirically fundamental or eternal in any sense?
A number of years ago I stumbled across the following formulation. [See attached image, The Nature of Understanding.]
Considered directly with the rational faculty, the full set of knowledge, contextualized in natural “reality,” can be recognized to be not quite, but nearly, infinitely irrelevant. Here we meet the juxtaposition of rationality and practicality: two terms so often implicitly wedded in our human disposition today.
If knowledge has been so useful (and it has) in the elaboration and complexified expression of living organisms on planet Earth over the past few billion years, let us make that the “thing,” and direct our attentions to the context of that thing, and the boundaries and how they continue to grow, change, and evolve together today.
There are many concerns with the dangers of AI. Raw and algorithmic data is exploding onto the scene in unpredictable and potentially volatile ways with simultaneous promise and existential threat. Is this unprecedented?
This priority that we place on rational knowledge, is it truly rational; is it truly knowledge? What is the nature of its utility and its value? What might context have to say to these questions?
I have long considered that both informal and formal social constructs are, themselves, well considered as “artificial intelligences.” They are artificial, in that they are neither comprehensively held within any single living organism at a given time, nor are they held entirely in common between any two organisms, probably ever. They are “intelligences,” in the sense that they enact a process of cumulative acquisition of knowledge and a subsequent application of skill to the environments that contain them.
Capitalism is one such relevant social construct, quite alive in the world today. I refer to the current dominant species of that expression as Algorithmic Capitalism for its likeness with modern conceptions of AI.
In theory, the broad supply chains of the world today, as well as many local ones, operate directly under, or at the very least under the influence of, this algorithmic process of “the rational market” and its “intelligent allocations.” (The knowledge acquisition of consumer desire, and the skilled application of resource allocation.) How’s that going?
In fact, so many of our stories of runaway AI converting everything to paperclips, or humanity itself to batteries, live not just in the fictions of pop media, but in the realities of converted wildlands and commodified natural interests. The ecologies of wilderness and society alike continue their precipitous approaches to existential thresholds, while our solutions and ways of addressing these threats continue to scramble for finger holds in the constructs of knowledge and cognitive intelligence.
Are there limits to the capacities of the rational mind? I would suggest there are no limits to its utility; its adequacy to every circumstance, on the other hand, not so much.
To quote my namesake: “What is love? Kabir will tell you. Suppose you had to cut off your head and give it to someone else, what difference would that make?”
Does that sound rational? Radical, to be certain, and yet isn’t love itself profoundly fundamental?
We’ve already seen how the objects of rational inquiry amount, rationally and meaningfully, to a profound insignificance. The corollary of this suggests that we must search outside of and beyond this territory for the skills and intelligence necessary to transform along the course of well-being as we ourselves, our cosmic boundaries, and our evolutionary context flow ever onward towards the cracking of the universal egg.
In one way of looking, the age of reason allowed humanity to transcend and go beyond the pre-rational, often derogatorily called “magical thinking.” Certainly this has had its utility, though we have also overstepped our own “scientific process” in declaring the pre-rational to be wholly deprecated and universally without value.
Similarly, in this polarization of good and bad, right and wrong, rational and non-rational, we have let go of the trans-rational, which has itself been a source of profound value for thousands of years at least. Cutting ourselves off from, and devaluing, an entire category of things “other than” rational seems itself not so rational after all.
These early days of the rise of AI add yet another element to the emerging polycrisis of the 20th and 21st centuries. The polycrisis gives rise to the meta-crisis, which takes us beyond the concrete problems to identify the actual and fundamental necessity that we evolve our ways of perceiving, conceiving, and acting in order to meet the challenge of the day.
In fact, it is our very way of being, a constructed assumption of a rules-based natural order whose prescripts and protocols would account for all things should we but master them with our intellects, that has itself risen as an artificial intelligence threatening to make all life but a battery, and a dying battery at that.
It has been said before that we must use our hearts, but this is not simply a platitude, nor should we limit our reach simply to love and compassion for all beings. We must open and plumb the depths of our humility, realizing the fragile and fleeting quality of all things born, organisms, ideas, and indeed even the spiritual.
I do not seek to hearken the end or the death of the rational. Rather I invite us to consider that perhaps the rational faculty is only just now conceived, gestated, and newly born. The rational mind now turns its eye upon itself and reveals an infinite horizon beyond. Nascent Reason must now find its place, its food, its predators, its mates, and its synergistic interdependence in a wild, wild ecosystem made up of infinitely more than just its little self.
In fact, AI even just as it stands today will change the face of our human presence on this planet, but it is not the mind of the computer with which we are compelled to contend in order to master the present and coming VUCA (volatile, uncertain, complex, ambiguous) world. To rise to the challenges we find today, it is our own minds we must understand and perhaps go beyond.
Feel the beauty…
Plumbing the depths of Beauty, Life, & Art!