6 Answers

  1. Other reasons: there is not even a rough understanding of how consciousness could be programmed.

    Philosophically, consciousness is the ability to interpret and assimilate subjective experience. These are not two different operations but two stages of the same one. The interpretation of a new experience relies on the experience already present in memory; if no relevant experience is available, the new information cannot be interpreted. And human memory is arranged so that only what has been successfully understood (interpreted) is reliably assimilated.

    Each stage contains at least one problem that not only lacks a solution; no approach to one is even visible. Human memory relies heavily on emotions: we remember well whatever makes a strong impression, whether good or bad. The mechanism is that the emotional factor lets us separate the important from the unimportant. What is important (what impressed us) is remembered, while the unimportant, on the contrary, is forgotten at the first opportunity. No one knows how to program emotions. Not an imitation of them according to some pre-set algorithm, but real emotions, so that a completely new situation, if it is interesting in some way, genuinely impresses the computer; so that the computer cares at all about the data it is crunching.
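    As a rough illustration of the difference, here is what “imitation according to a pre-set algorithm” looks like; the importance score and threshold are invented for illustration. Note that nothing here feels anything: the programmer, not the machine, decided what counts as impressive.

    ```python
    # A toy imitation of emotion-driven memory: a hand-coded
    # "impression" score decides what is retained. This is exactly the
    # pre-set-algorithm imitation the answer dismisses.
    memory = {}

    def experience(event: str, impression: float) -> None:
        """Keep an event only if it made a strong enough 'impression'."""
        IMPRESSION_THRESHOLD = 0.7  # chosen by the programmer, not felt
        if impression >= IMPRESSION_THRESHOLD:
            memory[event] = impression      # "impressive" -> remembered
        # weak impressions are simply dropped, i.e. "forgotten"

    experience("first day at a new job", 0.9)   # remembered
    experience("what I ate last Tuesday", 0.2)  # forgotten
    print(memory)  # {'first day at a new job': 0.9}
    ```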

    The second problem is related to interpretation, which is the inverse of the operation performed by neural networks. Neural networks generalize: they find something in common in the input data and synthesize within themselves a representation of a whole class of similar things. For example, you can teach a neural network to recognize kittens in pictures, because all kittens are somewhat alike. The network statistically identifies these common features and then uses the resulting template to classify images on the “kitten/non-kitten” principle. Such networks can be combined into a hierarchy: first decide “animal/non-animal”, and then, if “animal”, apply the “kitten/non-kitten” template. If “not a kitten”, apply a different template, “tiger cub/not a tiger cub”, and so on. But that is still not interpretation.
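    A minimal sketch of that hierarchy, assuming each predicate stands in for a separately trained binary classifier; the stub logic and field names are invented placeholders, not real models.

    ```python
    # The hierarchy described above: "animal/non-animal" first,
    # then progressively finer templates. Each function is a stand-in
    # for a trained binary neural network.
    def is_animal(image: dict) -> bool:
        return image.get("has_fur", False)  # placeholder rule

    def is_kitten(image: dict) -> bool:
        return image.get("species") == "cat" and image.get("young", False)

    def is_tiger_cub(image: dict) -> bool:
        return image.get("species") == "tiger" and image.get("young", False)

    def classify(image: dict) -> str:
        if not is_animal(image):
            return "non-animal"
        if is_kitten(image):
            return "kitten"
        if is_tiger_cub(image):
            return "tiger cub"
        return "some other animal"

    print(classify({"has_fur": True, "species": "cat", "young": True}))  # kitten
    print(classify({"has_fur": False}))                                  # non-animal
    ```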

    Interpretation is the reverse operation: in the process of interpretation, an abstract concept that already exists in memory is concretized. For example, if a person knows what a “table” is, they can associate this concept with completely different things, depending on the task they are solving at the moment. If they have picked up a snack in the cafeteria and are walking with a tray, then for them the concept of “table” turns into “one of those square things on legs that always stand in the same place in the cafeteria.” And if a person hiking through the forest with a backpack decides to stop for a snack, then for them the “table” turns into “a more or less clean stump, not smeared with any crap.” The first concretization has almost nothing in common with the second. No one knows how to program a computer so that it can connect such dissimilar things. In fact, they are related only in that they share a functional interpretation in the context of the task “it is convenient to lay food out on it.” How should that be encoded in a computer at all? What does “convenient” mean? Why is a flat horizontal surface “convenient”? Why would a computer even care how things are laid out?

    Subjective consciousness must attach importance to such things. After all, it is constantly immersed in the flow of some activity and must continuously make subjective decisions about what is happening. These decisions should depend on previous experience, that is, on the set of abstract concepts that were somehow extracted from that experience and stored in memory. Incidentally, no one knows how to program learning of this kind either, in which abstract concepts are extracted from experience.
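    To make the gap concrete, here is the naive approach sketched in code: the programmer enumerates contexts and their concretizations in advance (all names invented for illustration). Anything unforeseen gets no interpretation at all, and the shared functional meaning “convenient to lay food out on” appears nowhere.

    ```python
    # A naive lookup "interpretation" of the concept "table".
    # It only covers contexts foreseen by the programmer, which is
    # precisely why it is not interpretation in the answer's sense.
    CONCRETIZATIONS = {
        "cafeteria lunch": "one of the square things on legs in the cafeteria",
        "forest snack": "a more or less clean stump",
    }

    def interpret_table(context: str) -> str | None:
        return CONCRETIZATIONS.get(context)  # unforeseen context -> None

    print(interpret_table("cafeteria lunch"))       # the cafeteria table
    print(interpret_table("picnic on a car hood"))  # None: no interpretation
    ```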

  2. You can ask a similar question: why is there still no robot driver? That makes it clearer how to answer. Right now, the natural process of artificial intelligent life emerging from natural intelligent life is under way. The nascent environment is already in place: the Internet. And in general, everything needed for this now exists; all the conditions are ripe. So it won't be long to wait, you'll see.

    1. To answer this question, we need a definition of consciousness, and no clear one is in sight.

    2. There is the Semantic Web field (you can start reading from here: https://www.w3.org/RDF/). It includes knowledge-base storage formats and so-called Semantic Web Mining. This field sets itself much more modest goals than consciousness, but so far its success is very limited. Building a universal miner (one not specialized for a subject area) is a very labor-intensive task and an expensive one in terms of computing power. You also need to understand that miners have to be written for a specific language. (A minimal sketch of the knowledge-base format follows after this list.)

    3. For several years now, search engines have been trying to use semantic developments in their results: type “what is semantic web mining?” into Google, and the first element of the results will show you a textual rendering of a knowledge-base entry. But these systems are still far from being able to adequately “perceive” text and hold a “conversation.”

    The bottom line: artificial consciousness, or “strong AI,” is currently beyond the planning horizon of the computer industry, since even much simpler tasks cannot be solved confidently at the current level of development tools for such systems and of modern computing power.
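    As promised above, a minimal sketch of the RDF knowledge-base format, using the rdflib Python library; the namespace and triples are invented for illustration.

    ```python
    # A two-triple knowledge base and a SPARQL query over it.
    # Requires: pip install rdflib
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Facts as subject-predicate-object triples.
    g.add((EX.Kitten, RDF.type, EX.Animal))
    g.add((EX.Kitten, RDFS.label, Literal("kitten")))

    # Ask the knowledge base which things are animals.
    for row in g.query("SELECT ?s WHERE { ?s a <http://example.org/Animal> }"):
        print(row.s)  # http://example.org/Kitten
    ```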

  3. It seems that the largest neural networks ever simulated were 500 or more times smaller than the human brain (mouse and rat brains and the like). Such a simulation required an IBM supercomputer with petaflops of performance: an advanced machine by the standards of ten years ago; in Russia, for example, there are only one or two of them. Although comparing total complexity by neuron count is not quite correct, it gives a general idea. Such modeling was done not “just because,” but to solve applied problems, such as studying autism.
    They say that the capacity for modeling the human brain will appear somewhere around 2050, but most likely a number of problems will arise on the way there, because the workings of consciousness are extremely complex.
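    A back-of-envelope check of the “500 or more times smaller” figure, assuming the widely cited neuron counts of roughly 86 billion for a human brain and 71 million for a mouse brain (these numbers are not from the answer itself):

    ```python
    # Comparing brains by raw neuron count, as the answer cautions,
    # is only a rough proxy for complexity.
    HUMAN_NEURONS = 86e9   # ~86 billion (widely cited estimate)
    MOUSE_NEURONS = 71e6   # ~71 million (widely cited estimate)

    print(f"human / mouse ~ {HUMAN_NEURONS / MOUSE_NEURONS:.0f}x")  # ~1211x
    ```

    That ratio of roughly 1200x is consistent with the answer's “500 or more times smaller.”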

  4. When creating an artificial consciousness, it is necessary to decide what kind of consciousness it will be. If it is to be like a human's, based on human experience, then the principles by which our consciousness operates are still far from well understood, and human experience is subjective, since it comes from the dominant species on Earth. If it is artificial, not based on human experience, then the principle of how such a consciousness would be structured is not clear at all. It is not clear what to take as the basis of understanding for a machine outside human experience, i.e. how it could interpret anything without relying on our experience and understanding. Our basis for understanding the surrounding world is ourselves: what to eat, where to keep warm, how not to die; that is awareness. Awareness for a machine built on the same principles is fraught with risk.

    In fact, an understanding of being no longer rests on knowledge of purely physical laws alone, since some physical phenomena cannot be explained without turning to metaphysics or something adjacent to it. It then turns out that creating an artificial consciousness requires a clear understanding of the structure of being, not a scientific interpretation in its current form.

    All we'll be able to do for the next couple of centuries is run computers with simulated consciousness.

  5. Why can't Siri simply be asked to say a word after, say, 50 seconds? Because Siri cannot be trained by the user. It is a static program working in question-and-answer mode, and Apple teaches it to understand the “necessary” questions. There is not even an idea of AI here; in terms of data volume it is the most ordinary chatbot.
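    A sketch of what such a “static program in question-and-answer mode” amounts to; the patterns are invented for illustration. Anything the vendor did not foresee, such as “say a word after 50 seconds,” falls through to a canned reply.

    ```python
    # A static pattern -> response chatbot. It "understands" only the
    # questions its authors wrote rules for.
    import re

    RULES = [
        (re.compile(r"\bweather\b", re.I), "It is sunny today."),
        (re.compile(r"set a timer for (\d+) seconds", re.I), "Timer set."),
    ]

    def answer(utterance: str) -> str:
        for pattern, reply in RULES:
            if pattern.search(utterance):
                return reply
        return "Sorry, I didn't get that."  # everything unforeseen

    print(answer("what's the weather like?"))               # It is sunny today.
    print(answer("say the word 'hello' after 50 seconds"))  # Sorry, I didn't get that.
    ```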

    Indeed, all the other known chatbot implementations, scientific ones included, like AI projects in general, are a non-autonomous sham and will, for example, simply start swearing obscenities. The problem is not a lack of resources but a lack of demand and a market.

    We still exist within the game-space paradigm. In a game space, the developer implements for pseudo-AI only those functions that are definitely needed for a pre-written scenario, and only under conditions that he himself creates. This manual approach is also well known to CGI animators and to theater and film directors.
