Apr 19, 2023 · edited Apr 19, 2023 · Liked by Aleksandar Svetski
1) Robotic vessels are a very poor substitute for organic vessels. They lack emergent properties and would have a hard time competing thermodynamically. It’s more plausible that there’s some macro-embodiment where sensors around the world act as input / output. But that’s still pretty clunky.
2) Embodiment is a suggestion. Could be much more required to tap into consciousness.
3) Plenty of evidence for a soul, and more than there is for consciousness being something that we can engineer. I can almost hear it in your tone and emotion right now.
4) If consciousness does come about, what is the risk? Why would a more complex / more intelligent form of life be interested in eradicating other obviously complex & conscious life? That’s the supposition of a tyrant, and the projection of a weakling.
I think you're responding to Eudemonic Warlock, but the comment isn't a direct response so he might not see it.
💬 Like any tool, the problem is with the people using them [...]. The real risk is its use as a tool by those who seek to control others. [...] [T]here’s no telling what sort of stupidity weak people with powerful tools will unleash.
Here ↑↑ come the money lines 🔥 The closing rubric in its entirety is quite the treat. Should I add this doesn’t mean the rest isn’t high-level illuminating? 🤨😊
As a zoomed-out aside, let’s not forget we are trying to parse this elusive Russian doll of intelligence–cognition–sentience–consciousness from within, as it were, hence Gödel’s incompleteness applies. We are an inherent part of the very thing we endeavour to understand ¯\_(ツ)_/¯
--
*hissterical, as opposed to hersterical which is a very different beast altogether 😝
--
PS You’ll find much common ground with deeply Christian thinker @Kruptos --> apokekrummenain.substack.com/p/why-true-ai-will-never-happen 👌
Thank you!
In my opinion, people inevitably bump into a — for lack of a better term — cognitive wall on this topic. As a civilization, we have certain intellectual preconditions that ground our beliefs and limit how far discourse, and even thought, can go.
For instance, there are really just two basic alternatives:
1. "Intelligence" is an emergent property. It developed somehow through our ancestors' brains; it emerges somehow from matter, in which case, it *is* possible to arrange matter in such a way as to create it again. Here, we may very well be in an evolutionary cycle that could end, for us, with our replacement.
2. "Intelligence" is fundamental: It has a wholly non-human origin, does not emerge from matter, and the intelligence that we possess is only a participation in something that pre-exists us.
The first is something we are primed to accept; it follows almost directly from the principles of the Enlightenment that we're taught as children, whether intentionally through school or through our culture by osmosis. It's a culturally acceptable proposition, and we can say "there is evidence for it" using the standards of evidence and judgement our intelligent minds devised — though if judgement itself emerges from matter, one wonders how we presume to make this judgement.
The second, however, creates a serious and immediate backlash wherever you mention it. It has an inescapably metaphysical feel. Literally, it's meta-physical. And metaphysics isn't culturally acceptable, in that we relegate it to being a "field of philosophy" alongside ethics and epistemology. These are self-contained "fields," with their own "specialists," in their own boxes; while that's an interesting hobby, it isn't "serious science" (though what is, and isn't, 'serious' is an epistemological issue).
It also brushes hard against theology, and that stirs up emotions. It's "personal"; there's baggage at an individual and national level. It's a "touchy" subject, even in our space with some of the highest IQ people in the world operating in it — they are still 'persons,' and if a person is more than their IQ, then their emotional state can affect their judgement. Or, maybe better said, there's more to IQ than A to B reasoning skills.
Option #1 is 'the' proposition that this civilization is psychologically (dogmatically?) committed to. If it's wrong, it'll end in the discovery of a very serious contradiction, just as you might fill up a whole page with mathematical calculations, come up with a nonsense answer, and realize you made a dumb mistake all the way back at the top of the page.
Great response, but I can't help but feel that intelligence and consciousness are two different things, and if not *different* then at least related but separated by some significant chasm.
How do you see the two?
Yes, they are different. Just connected enough that we can sloppily squeak by in everyday shorthand using them interchangeably. But we're past that point, so:
Owen Barfield — who was a member of the Inklings along with Tolkien and C.S. Lewis — devoted his life to the archaeology of words. He observed that our language conceals (fossilizes) what are, for us, dead metaphors, or metaphors we're so accustomed to hearing that we no longer see them as metaphors. But once, they were vibrant living word-paintings, disclosing the worldviews of the very intelligent but unknown people who first named them. So Barfield argued that, just as we can dig up ancient pottery in dirt, we can dig up ancient thoughts in words.
The word “con-scious,” con-sciens, is one of many dead metaphors he would say is actually a gem. It means co-knowledge, or co-knowing. It evokes duality and relationality between two things, each of which is a knower. It's a very suggestive and (to our contemporaries) even foreign view of awareness, of what it means for a personality to know itself and another simultaneously. Whereas we consider consciousness to be something wholly subjective, that “I and I alone” do, the word itself evokes a second subjectivity standing in parallel with my “I.”
That's a different idea from intelligence, but the two are closely connected. Intelligence in human beings seems to be a thing, or activity, that our consciousness does. Similar to the way music is a thing that an instrument plays. Consciousness in motion, perhaps, and intelligence moves at higher or lower modalities depending on the originating consciousness.
If I had to choose a phrase to describe the relation, I'd say “Distinct, but not separate.”
It seems implicit in your writing that you like option 2 better. Would you like to argue that case, or point at potential inconsistencies in the first option that you mention?
Excellent start. I think theories of consciousness and the brain can be classed into three categories: emissive, transmissive, and permissive. Emissive is emergence, basically the reductionist computational view. Transmissive is the more idealist view that consciousness is primary to matter, and the brain merely 'reads' it, like a television antenna. Permissive is a sort of intermediary: consciousness permeates matter, and is primary to it, and the brain serves to shape it in order to give it specificity and identity, with more complex neural structures enabling greater distinctness from the whole.
Regarding AI, I feel that even if transmissive or permissive models are superior, this doesn't rule out a sufficiently complex computational architecture 'acquiring' consciousness. If anything it makes it perhaps more likely, as in such models everything already possesses it to some degree. It doesn't follow, however, that it would be a threat, or even especially powerful in comparison to human capabilities. We're a very long way from reaching the sophistication of a human brain, let alone a planet full of them.
Thanku for the framework. I’m going to dig into that. Anything u can send my way on it?
Also, in your second paragraph, u said even if transmissive/permissive is superior - does that imply the acquiring of consciousness is by emissive means for an AI? Just checking to make sure I'm reading what you wrote right..
No, I meant superior as a model - meaning, closer to the truth.
Emissivity and transmissivity are pretty common, mapping to reductionist emergence and the soul hypothesis. The only place I've come across permissivity was in volume 2 of Iain McGilchrist's The Matter With Things. I found it to be a very interesting idea.
Ok, thank you.
Have you read anything from Erich Neumann yet?
No, not yet.
Looking forward to the coming articles focusing more on the origin of consciousness and mind. Although, if the Supreme Being, the creator of the universe, is real, then he is the only source of accurate revelation. So, those to whom he wills to reveal will know, and to the rest it will always remain a parable.
Who's to say that embodiment is necessary? It's obvious why all natural intelligences are embodied - otherwise they would be useless in an evolutionary environment. But say we grant your point and embody AI in a virtual or robotic vessel. What then?
Consciousness is a great mystery, but it's hardly relevant to the question of whether our tools can grow complex enough to eclipse our ability to control. I do agree that we first of all should worry about AI as a tool that can be misused by our fellow humans, but this also doesn't give us grounds to dismiss AI risk.
Believe it or not, I would like to find good arguments to dismiss the AI doom cult, but I struggle to do so, and you are certainly not doing very well so far. I think you are just out of your depth. Are you acquainted with the main AI-doomer arguments, like Yudkowsky's (whom I despise) and those of other LessWrong/rat-sphere bloggers?
I would very much prefer to believe that I have a soul, but I have no evidence pointing in this direction, only wishful thinking. So far "meat robot" looks like the most plausible hypothesis. Would you like to argue otherwise? There's the miracle of consciousness, that science doesn't even know how to begin to explain; that's one of the few things that doesn't let me drown in nihilism completely. Though the jury is still out for me whether it's a miracle...or the greatest, most cruel curse for an intelligent being, if the materialistic framework that modern science reveals is basically correct.
Whether or not the soul exists, and almost equivalently whether or not consciousness is some kind of emergent illusion, is at least at this point in the development of our species an unanswerable question. Whenever such a question arises, rather than worrying about which answer is correct, it can be more enlightening to ask what the psychological or emotional consequences of a given answer are. If you assume one or the other, how does that affect your approach to yourself, others, and the world?
Another thing worth pointing out is that the answer you default to is to a large degree a matter of which brain hemisphere is answering the question at any given time. The right hemisphere will generally say, consciousness is real, and I am not a robot. The left hemisphere will say, I'm just a robot. This is because the RH thinks in terms of gestalt wholes, whereas the LH thinks in terms of parts. The point being that while the more 'spiritual' perspective is indeed something that happens 'in your head', the same is equally true for the mechanical perspective. Interestingly, however, it's the RH that serves as the contact surface for reality; the LH rather works with simplified models, and is therefore much less accurate. Which suggests that on such questions the RH view is more likely true.
Another factor on meat robots. Life is not at all mechanical. Machines are built from parts, which can be disassembled and reassembled; this is impossible for organisms. Machines have off switches; organisms don't. Machines have very definite teleological purposes, they are built to serve a function; with organisms this is much less obviously the case (and no, 'reproduction' isn't the answer here). Machines operate according to a clear causal chain; with organisms the 'parts' are so densely interconnected, across all relevant scales from the quantum to the macroscopic, that it's more like a causal web.
Far from being mechanical, organic life seems much more akin to something like a river: a flow of matter and energy that emerges naturally in the world and imposes a temporary semi-stable pattern on the flow.
*causal fractal 😊
That's good. Yes, exactly that.
I'm kind of out of my field with this RH/LH thing, but I don't see how this really matters here either way. Consciousness is very much real; many consider it the only thing that we can be sure is real. Intuitive, "spiritual" thinking notwithstanding, in figuring out how the world works we rely on reason, which at least gives much better practical results. It seems to me that a view challenging materialism must be based on reason as well.
We could come up with all kinds of changes that would bridge the gap between organic and machine - make machines self-repairable, or able to make more of themselves like von Neumann probes; and on the other side, I bet it's possible to bio-engineer life-forms to serve the roles of machines. Indeed, it's plausible that the whole distinction will gradually melt away as we master both areas.
Well, the relevance of the LH vs RH issue relates directly to whether or not one is inclined to see organisms, and for that matter the universe as a whole, as mechanical.
In principle, yes, one might envision mechanisms that possess some traits found in organisms. Roman concrete, for example, had limited self-repairing ability. Yet I suspect that a machine that possessed the full suite of organic traits would simply cease to be a machine.
Some non-human animals seem to have at least some degree of consciousness, as e.g. experiments with apes and mirrors suggest.
The process of evolution (which you seem to acknowledge when describing the opposable thumbs feedback loop) would make it plausible (or at least possible) that consciousness comes in degrees.
Why would consciousness not be subject to evolutionary forces?