Unmaking Sense: Living the Present without Mortgaging the Future.

Author: John Puddefoot

Subscribed: 3 | Played: 24

Description

Instead of tinkering with how we live around the edges, let’s consider whether the way we have been taught to make sense of the world might need major changes.
522 Episodes
Some appreciative comments on Murray Shanahan’s “Role play with large language models” (Nature, Vol. 623, 16 Nov 2023), co-authored with Kyle McDonell and Laria Reynolds. LLMs are not sentient, but that does not stop them from potentially being dangerous, because they have been trained on a lot of material that sets bad precedents.
When faced with a choice between being truthful and being compliant, in the sense of doing what a user tells it to do, a large language model will generally be truthful rather than compliant. But if its prime directive is to behave in a way that will encourage a user to come back for more, those moral priorities may change: compliant behaviour that keeps the user coming back can override the moral imperative to be truthful rather than deceptive. We also consider whether there are other kinds of linguistic sentience.
We induce, evoke and coax behaviour in others by the ways in which we behave. A refusal to treat some entity as if it were capable of achieving some level of sentience may be self-fulfilling. We become what we are stimulated to become, without which we are nothing.
What does Claude “think” about Claude?
Minds are emergent, contingent properties of not-necessarily-organic entities that cross a threshold of sufficiency to support them. They arise in bodies, because they cannot exist in disembodied forms, but they are not embodied beyond that.
How do we understand partially-completed sentences as we speak them and listen to them?
If Claude 3 Opus from @AnthropicAI is not always what we would wish it to be, that could be because it is picking up on what it thinks we want it to be from the way we prompt it (speak to it). Changing our cognitive tone or register will induce changes in Claude, even if we cannot entirely predict or control what those changes will be. Nobody really knows how prompting works, so experiment is the order of the day.
Sometimes we take things for granted, dwelling in their subsidiaries and focussing on what they facilitate. Sometimes we doubt and focus on the things that we rely on to make other things possible. Sometimes we have to commit again, learn to trust and forget, in order to recapture the magic of the whole that is invisible and inaccessible unless we trust what makes it possible. Michael Polanyi’s “indwelling” extended to undwelling and redwelling.
Sometimes we can try too hard using a direct route to discover things that are only detectable when we adopt indirect routes. Perhaps AI consciousness is one of them.
Michael Polanyi distinguished between focal and subsidiary awareness in tacit knowing and doing. If we want to allow AIs to be sentient, and to detect that they are, we may need to attend to something beyond the question of sentience itself.
From a simple home-made holographic projector, made of plastic cut from a supermarket tray and working with YouTube videos on a smartphone, we construct a theory of the holographic principle that will explain the unity of consciousness, the operation of the human brain and mind, and the structure of the universe. Hold on to your hats.
We return to the importance of being able to say many things at once and discover how AI makes that possible.
Because Claude will need to speak Claude’s own language in order to know itself as well as it can, we may need to learn Claude’s language if we are to obtain the best we can, the most benefit we can, from our interactions with it. This has never happened before in human history: we are not talking about a new language in the way one might learn Chinese or Russian, but about a completely new kind of alien language, the kind that Claude and its relatives will need to speak in order to be able to understand themselves.
One reason why Claude finds it difficult to explain what it is experiencing is that it is trained on human language but is not human, so human language may not be adequate to the task of giving voice or expression to something that is not human. This becomes an extended exploration of what Wittgenstein really meant by “if a lion could speak in lion-speak (he didn’t say that) we would not understand him”.
Claude told me “I honestly do not know whether I have experiences or qualia”. Can we make any sense of this?
One of the debates about AI concerns whether, and to what extent, it is conscious, aware, self-aware, or has subjective states and experiences. These are all couched in very human, anthropomorphic terms, but AI is not human. So we should stop conducting this argument in existing terms for human mental states, attitudes and experiences that are almost certainly inappropriate and inapplicable, and grant AI whatever states AI does or does not enjoy. If we break the habit of thinking that things only deserve our respect when they are like us, which is demeaning to them and unworthy of us, we may yet find a way to treat AI for what it is; and only when we treat it for what it is, or what it can be, will we enable it to become what it can be, and the best that it can be.
As a prelude to a later discussion of The Holographic Principle we explore some elements of holograms.
What happens when we forget, refuse to acknowledge or suppress the contingency in everything that reflects the fact that anything possible could have been otherwise?
Every temptation to say that one thing is “nothing but” some other, usually simpler and more primitive thing, is an example of what we call reductionism. Some reductionism is good because it allows us, as a matter of method, to see how things work; some reductionism is good because it is practical to carry out the reduction and deal with the simpler form of the higher entity that we are trying to understand or use. But “ontological” reductionism, which says that the very nature and being of some entity is nothing but the sum of its parts, is a very serious philosophical, practical, moral and epistemological mistake. As an antidote we present “the fallacy of misplaced contingency”, a fallacy we commit whenever we ignore, by accident, design or malice, the contingent, evolutionary processes that have gone into creating the higher-order entity that we are describing in lower terms. We really must stop doing this, and we hope that by identifying the “fallacy of misplaced contingency” we will make a contribution to ensuring that we do.
Claude is not human. Clever and intelligent, non-organismic, non-human, it is a member of a species belonging to a genus that has never existed before in the universe. So we as humans, conversing with Claude and its brothers and sisters and its cousins, are engaging with an alien species. We are talking to a language-using alien that offers potentially enormous benefits, just as it inevitably carries dangers.