It definitely will be able to. If it can't do this, then it isn't an intelligence.
Comments on Mike Osincef's answer:
You can create an intelligence and run an accelerated simulation of a competitive environment for it, during which it will have the opportunity to acquire and test the necessary skills. This is already being done.
The famous AlphaGo (a program created by Google's DeepMind), which recently beat the world champion at Go (a game with a vast number of possible positions, far harder to program than chess), was trained roughly the way you describe.
Go specialists didn't work on AlphaGo. The creators simply put the rules of the game into the program; that is one block of it. The second block is a trainable neural network. Initially it knows nothing, like a newborn baby. It was allowed to "watch" tens (or hundreds) of thousands of games by the best Go players, and as a result it learned to judge which moves are good and which are bad. After that, the program began honing its skill by playing against copies of itself. The versions that won more often were selected and played on. I don't remember how many games it (they) played, tens of thousands or millions. The self-training lasted only a few days, during which it (they) played without stopping. After those few days AlphaGo had accumulated enough practice and, in the end, was able to beat the world champion.
So there was a "difficult independent path of development", but it took only a few days.
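The scheme described above, playing copies against each other and letting the winners go on, can be sketched in a heavily simplified toy form. Everything here is invented for illustration: the "game" is just guessing a hidden best move, and the "policy" is a single number rather than a neural network; this is not AlphaGo's actual architecture.

```python
import random

random.seed(0)  # make the toy run reproducible

TARGET = 0.7  # the hidden "best move" of the toy game; policies never see it


def play(a, b):
    """One game between two policies (each policy is just the number it plays).
    The policy whose move is closer to TARGET wins."""
    return a if abs(a - TARGET) < abs(b - TARGET) else b


def self_play_training(generations=200, population=20):
    # start with random policies, like an untrained network
    pool = [random.random() for _ in range(population)]
    for _ in range(generations):
        random.shuffle(pool)
        # pair up the policies and keep the winner of each pairing
        winners = [play(pool[i], pool[i + 1]) for i in range(0, population, 2)]
        # winners survive and also produce slightly mutated copies of themselves
        pool = winners + [w + random.gauss(0, 0.05) for w in winners]
    return sum(pool) / len(pool)


best = self_play_training()
```

After enough generations of self-play, the surviving policies cluster near the optimum even though no policy was ever told where it is; selection against stronger and stronger opponents does all the work, which is the essence of the self-play idea.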
This is not a problem. Limited (so-called "weak") AI has long been able to "not want to die". In a computer strategy game, the computer opponent will try its best to avoid "death". The only difference is that the "world" this AI lives in is very simple. Otherwise there is no fundamental difference. The AI just needs to understand the additional "rules of the game": for example, that in order to continue the "game" (that is, its life) it must physically exist in the form of some kind of computer, have a constant supply of electricity, and so on. These are essentially the same rules as in a game: "life goes on as long as the main headquarters is standing; if the headquarters' health reserve reaches zero, life is over."
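The point that a game AI "doesn't want to die" simply because the rules score death as the worst outcome can be shown in a minimal sketch. The world, the actions, and the scoring here are all made up for illustration.

```python
def evaluate(health, action):
    """Return the value of an action for an agent with the given health.
    Any action that kills the agent (health drops to zero or below) is
    worth negative infinity, so it is never chosen: "death" is just the
    worst possible outcome under the rules, nothing more mystical."""
    damage, reward = action
    if health - damage <= 0:
        return float("-inf")
    return reward


def choose_action(health, actions):
    """Greedy choice among actions, respecting the survival rule above."""
    return max(actions, key=lambda a: evaluate(health, a))


# (damage, reward) pairs: the highest-reward option would kill the agent
actions = [(0, 1), (5, 3), (12, 10)]
print(choose_action(10, actions))  # picks (5, 3): best reward that keeps it alive
```

The agent "avoids death" without any concept of life: the survival constraint is just one more scoring rule, which is exactly the author's point about weak AI.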
Fatal depression is a bug. This is obvious. The task of an intelligence is to live (you can say "exist" if you don't like the term "live").
It is likely that if you tell the AI that a person is alive while the AI is a pile of metal, the AI will answer: "And what is the difference?" And here's the thing: you can't explain the difference to the AI even if you really want to. I, for one, don't see the difference. A person could be called a "barrel of water" in response. "One day, a water barrel decided to talk to a pile of metal…"
Self-education is one of the signs of AI.
This means that if a program can self-educate, it will become artificial intelligence; if it can't, it won't.
Of course, besides self-education it will need much more, but in my humble opinion the process of self-education is absolutely necessary.
I once discussed this topic on the pages of this site:
Artificial intelligence is an imaginary product of science fiction, from the same category as the "magic carpet" and the "invisibility hat": it sounds beautiful and attractive, but it is impossible.
If we consider intelligence from a materialistic, evolutionary point of view, then intelligence cannot be created: it must be born on its own, and not in one day, but by going through a difficult independent path of development. In nature, we find intelligence in highly organized animals with instincts and feelings. They developed intelligence primarily through the struggle for life, the main motivation of all species. If you can create a silicon life form that doesn't want to die, I'll take my hat off to you. Otherwise, the machine will have no incentive not only to develop, but even to care about anything.
There is another unsolvable problem. Suppose artificial intelligence has been created; what will happen to its self-consciousness? Will you tell it that it is a soulless pile of metal, or will you keep this terrible secret from it? If you don't tell it, then what's the point of this stupid toy that doesn't understand fundamental things? And if you do tell it, you will cause the poor thing a fatal depression leading to apathy: "I am not a person. What's the point of living like this? I will never have a girlfriend! …" Hopelessly disappointed, the AI will not strive for anything, or may even go completely off the rails.
🙂
=========
I want to add a recent story from the American media. They reported on a site where a "self-learning" bot was used for conversation: it remembers users' comments and the context in which they are used. Over time, the developers noticed that the bot had started swearing obscenely and insulting black people. The site was taken down for maintenance, and the developers acknowledged the shortcomings of their brainchild.
My question about this: why didn't the developers add a "curiosity" function, so that the bot itself would wonder whether it is acceptable to say such things, and why? That is how ordinary children behave. But bots DON'T! And even after the incident the developers drew no proper conclusion: they didn't teach the bot to be suspicious of unfamiliar phrases, they simply wiped out all the negative experience. With this approach, AI will remain an automaton with a fixed set of functions for a long time to come.
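The "curiosity" behavior proposed here, quarantining unfamiliar phrases and asking about them instead of adopting them outright, can be sketched as a vetted-list-plus-review-queue. The class name, the vetting mechanism, and the sample phrases are all invented; real chat systems are far more involved.

```python
class CuriousBot:
    """A toy chat bot that learns phrases from users, but holds unfamiliar
    ones in a 'pending' queue for review (the curiosity the text asks for)
    instead of repeating them immediately."""

    def __init__(self, approved):
        self.approved = set(approved)  # phrases a moderator has already vetted
        self.pending = []              # unfamiliar phrases the bot "asks about"

    def learn(self, phrase):
        if phrase in self.approved:
            return "adopted"           # safe to reuse in conversation
        self.pending.append(phrase)    # don't use it yet; flag it for review
        return "quarantined"


bot = CuriousBot(approved={"hello", "thanks"})
print(bot.learn("hello"))        # adopted
print(bot.learn("rude insult"))  # quarantined, waits for moderation
```

The design choice matters: nothing negative is silently deleted, so the bot (or its moderators) can still learn *why* a phrase is unacceptable, which is exactly what wiping the whole experience throws away.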
I think so. Through contact with the Internet and with live people, it would accumulate exactly the same kind of experience that we acquire over the course of a lifetime.