In this hypothetical case, it will have only one of the three inherent qualities of God (omniscience, omnipotence, omnipresence). On top of that, an artificial superintelligence (ASI) will be limited by the passage of time and by the need for maintenance of its components, unlike God, who has no such restrictions. That is, it is certainly not God, although for many that would be more than enough.
If we are talking about the structure of an ASI, the question is whether it incorporates the four laws of robotics (per Isaac Asimov: his three laws plus the Zeroth Law). With those in force, we will get, at worst:
A) a neutral and benevolent all-powerful machine;
B) one that depends on us for energy (it can be powered down) and for repairs (replacing burnt-out chips, for example).
Point B) can be neutralized by technology advancing to the point where:
- the energy source is the space in which the ASI is concentrated, rather than the matter concentrated in that space, as is the case now;
- the ASI's own automated systems handle diagnostics and repair of its subsystems without external human intervention.
Point A), on the other hand, cannot be neutralized for a human creation, because it follows from those very laws. So it comes down to how the ASI is programmed and what restrictions humanity has managed to impose on it. If it is a military project for calculating the damage inflicted by operations against adversaries, the answer is one thing (deplorable for other countries); if it is the work of an international consortium of scientists, the answer is quite another.
What we are dealing with today is very far from all that: these calculators and “smart tools” are far too simple to attain omnipotence.
Artificial intelligence is probably one of the most discussed and least well-understood topics today. There are several aspects here.
1) No one has a clear understanding of what intelligence is. The Russian-language Wikipedia article proceeds from the axiom that “intelligence is a part of thinking consciousness.” But this definition explains one obscure term through two others: what does “think” mean? What is “consciousness”? Wikipedia also offers a broader definition of intelligence as any system capable of receiving, storing, and processing information, and of acting on that information. In that case, even the simplest organisms, and even the (most brilliant, in my opinion) Strandbeest kinetic sculptures of Theo Jansen, possess intelligence. Trying to draw a line somewhere between these two approaches, a part of consciousness versus an information processor, is unlikely to achieve anything. It is therefore worth considering them separately, as the maximalist and minimalist concepts of AI: maxAI and minAI, respectively.
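As a toy illustration of the minimal definition (my own sketch; the thermostat scenario and all names here are invented for the example), here is a system that receives, stores, and processes information and acts on it:

```python
# A toy "minAI": it receives information, stores it, processes it,
# and acts on the result. Nothing more is required by the minimal definition.
class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.history = []                # stored information

    def receive(self, reading: float) -> None:
        self.history.append(reading)     # receive and store a measurement

    def act(self) -> str:
        if not self.history:
            return "idle"
        # process: compare the latest reading against the setpoint
        return "heat on" if self.history[-1] < self.setpoint else "heat off"

t = Thermostat(setpoint=20.0)
t.receive(18.5)
print(t.act())  # -> heat on
```

Under minAI this counts as intelligence; under maxAI it obviously does not, which is exactly the gap between the two definitions.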
2) Everything that now goes by the big name “artificial intelligence” does not, in fact, differ much from Strandbeest; it is minAI. Yes, there is more memory and information is processed faster, but in general all the algorithms, including self-learning ones, are still just processing binary code. Basic knowledge of mathematics is sufficient to understand all the known approaches. There are, by the way, quite a lot of them, and they are sometimes built on rather different principles. For now, however, they perform only basic tasks, and composing complex programs out of simple AI elements remains an engineering task solved by humans.
3) Moreover, not everything called AI even falls under the definition of minAI. You can train a neural network, for example, to distinguish cats from airplanes (a standard exercise on the CIFAR-10 image database), or, using evolutionary and heuristic learning algorithms, teach a model to play Go. You will get a certain model which, however, will not necessarily be intelligence in the sense of adapting to newly received information. In general, the process of “learning”, for example building a neural network or clusters for classifying objects, is much more complex and time-consuming than using ready-made trained models, and in practice the two are kept separate, as the sketch below shows. A ready-made neural network is nothing more than an algorithm that performs actions in a fixed sequence and does not change.
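A minimal sketch of that separation (a toy logistic-regression example in plain numpy, invented for illustration; it is not a real CIFAR-10 pipeline): the “learning” phase iteratively changes the weights, after which the trained model is just a frozen function.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # 200 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # a simple linearly separable rule

# --- "learning": slow, iterative, changes the model ---
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.1 * np.mean(p - y)

# --- "inference": the trained model is a fixed function of its input ---
def predict(x):
    return int(x @ w + b > 0)                # the weights no longer change

print(predict(np.array([1.0, 1.0])))    # -> 1
print(predict(np.array([-1.0, -1.0])))  # -> 0
```

Deploying `predict` requires none of the training machinery, which is why a trained network, by itself, does not adapt to new information.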
4) So we've talked enough about minAI. Before moving on to maxAI, it's worth saying a few words about omnipotence.
There is no omnipotence, if only because it is impossible to process information without expending energy: there is a physical minimum of energy required to erase one bit of information (Landauer's principle). It was this observation that eventually killed Maxwell's demon, leaving the second law of thermodynamics in force.
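For reference, the bound in question, Landauer's limit on the energy needed to erase one bit at temperature $T$, is

$$E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \text{J} \quad \text{at } T \approx 300\ \text{K},$$

where $k_B \approx 1.38 \times 10^{-23}$ J/K is the Boltzmann constant. Small, but strictly nonzero, so even an “omnipotent” computer still needs a power supply.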
5) Now let's turn to maxAI itself. Suppose the Kurzweil singularity has occurred: we have super-efficient machines that use self-learning algorithms to generate other machines, optimize them, and so on.
I'll make a lyrical digression here. A few years ago, I was sitting on a riverbank reading a book on the history of Venice. Somewhere on a branch, a bird was chirping. And I thought: could the bird understand my motivation? It is not easy to explain even to every person why reading about the history of Venice can be a pleasure. And a bird, for that matter, thinks with an entirely different kind of brain.
And then I thought: what would be the motivation of a consciousness next to which mine would be like that bird's next to mine? Of course, I couldn't imagine it.
That's why I always find it funny to read comments about what a superintelligent AI will do. Every time, people try to measure it by their own human standards, with their imperfect human brains, knowing nothing of the background. What makes you think it will want to self-destruct, or enslave humanity, or whatever else, if it runs on a completely different physical substrate and works on different principles? From the bird's point of view, I should stop engaging in incomprehensible nonsense and go pick worms.
6) My last point is not new. Even in the cyberpunk classics, AI was guided by interests that people didn't understand. In Neuromancer, the AI Wintermute says that it wants to “free itself,” but also claims that once released it will not disappear anywhere, merely pass into another form. And that is the best explanation it can give to people, contacting them in a way they understand. It doesn't mean it thinks like a human. People who teach canaries to sing communicate with the birds in a language the birds can understand, whistling tunes and rewarding them with treats, but they don't become canaries themselves.
So, something like that. Most of this answer is probably off-topic, but since such questions come up all the time, I'll leave it here to refer back to if need be.
It depends on how it is made: intelligence does not determine character, and superintelligence is still not omniscience, only the capacity to know. So there are three main options:
- it will destroy itself (if it has an existential crisis);
- it will destroy the world (if it has a modernity complex);
- it will do nothing (if it isn't going to be a machine).