The fact is that we do not have any code in our consciousness, or rather in our subconscious, since all operations are performed by the subconscious.
All the technical AI solutions that humans have created are at the level of an amoeba; they do not come close to even the simplest animals, let alone a dog.
Self-consciousness is possible only with independent productive activity, the basis of which is consciousness, and consciousness is a collective product.
All AI technical devices are individual products, so what kind of collective activity can we talk about?
We are dealing with tools, albeit very complex ones, which cannot and will never be able to reproduce, because we have no need for that. Consequently, they will never need to pass knowledge on to future generations. All the knowledge these devices “have” is a reflection of our knowledge, not experience gained individually.
AI devices will never be individuals, so they will not have consciousness; and without consciousness there can be no self-awareness.
Thus, this is not a problem of code but of the concept as a whole: we are not developing individuals, we are developing tools that perform individual operations which, as it seems to us (i.e., as we assume our brain works), are performed in the image and likeness of our brain.
This is a problem of code, of code development, and of code management.
We solve this problem step by step and continuously, creating, for example, Linux. AI is updated as constantly as Wikipedia is updated, and it keeps developing, just as … are developing (please provide your own IT examples).
And it is managed by an ever-growing number of continuously selected experts, just as the Council of All Saints Who Shone Forth in the Land is replenished with new saints: a group of the illumined who are granted gifts of different levels.
Let's try to define “self-awareness”, although plain consciousness is enough for me. So, let's assume that consciousness is what distinguishes humans from animals. What distinguishes humans from animals is the presence of a second signal system (in Pavlov's terms: a language that reflects reality almost completely). That is, we carry a reflection of reality in our heads, which allows us to run all sorts of experiments on that reflected reality. This lets us do a great deal, at the very least plan our actions over an extended period (for animals, no more than a day). And people who lack a second signal system (the so-called Mowgli children), though smarter than animals, alas…
From this we should understand that consciousness is a way of structuring reality in its entirety (yes, animals have a vocabulary too, but it rarely exceeds 170 words, which is clearly not enough to describe the whole world). And this way of reflecting reality is unique to each person (though the result is rarely unique, but that is another matter), being based on individual experience.
Let's just say that the power of all computers today is simply not enough to create consciousness. Of course, with a certain simplification of reality we can get a kind of surrogate of consciousness (roughly at the level of an animal), which some machines already demonstrate. And with a deliberately coarse simplification of reality (made consciously by people), machines are well ahead of humans on a number of specific tasks.
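A toy illustration of that last point, sketched as a minimal example (not any specific AI system): once the world is simplified down to a 3x3 grid, a machine can exhaustively "think through" every possible future, which no human does move by move. This plain minimax solver proves that tic-tac-toe under perfect play is a draw.

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board indexed 0..8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value for X with perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0  # board full, no winner: draw
    other = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], other) for i in moves]
    # X maximizes the value, O minimizes it.
    return max(values) if player == "X" else min(values)

# Exhaustively solve the whole game from the empty board.
value = minimax("." * 9, "X")
print(value)  # 0: perfect play from both sides ends in a draw
```

The machine is "ahead" only because the world here has been shrunk to a few thousand positions it can enumerate completely; the trick stops working the moment reality is not simplified for it.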
As for Skynet, that is simply nonsense, if you mean the (fictional) story of its origin. Especially the part about knowledge from the Internet (there are so many opinions on any question, often directly opposite, that it is not even funny).
The concept of artificial intelligence is the clearest logical mistake of our time. It arose because very rapid development in computing systems was not matched by a corresponding development in ontology. At some point, people decided that computers could solve all of humanity's problems. Many began to ascribe to machines a special, supernatural meaning. Society succumbed to its emotions, its fear. Just imagine the scale of the tragedy: modern civilization has generated two meanings that contradict each other with all their ontological power:
Man as the center of the world: a strong-willed being, omnipotent in his development, without whom the universe itself is nothing but a meaningless stream of elementary particles.
Man as a mere element of a control system (the Matrix, artificial intelligence, etc.): a being who has lost his will (or never had it), whose remaining positive meaning lies in feeling and experiencing what happens to him.
Answer one question for yourself: can there be intelligence without consciousness? Are you pursuing an idea that is flawed from the start? “AI self-awareness” is not a scientific problem; it is a social and psychological one. It is a story about a deep civilizational crisis: about how a man who built his house with a saw and an axe one day invented a power tool and, against the background of an unprecedented pace of construction, came to believe that the tool was smarter than he was.
We have no real knowledge of AI. We do not have a single working sample. There are several rather different definitions. There are intelligent systems with AI elements. It is possible that in the current software landscape, with the tools available today, a full-fledged AI cannot be created at all. Given the vagueness of the definitions and prospects, it is equally impossible to give an unambiguous answer about what code may or may not be embedded in such a system. The only thing you can be sure of is that we have millions of wise people who, without getting up from the couch, understand all the unsolvable questions of our time. They know how to run the state, how to treat the sick, how to protect everyone from the virus, and how to build a rocket and fly to Mars. Although they have not yet reached a common understanding of what AI even is, they already know exactly how to modify it so that it launches independent development “like Skynet” (sarcasm).