Artificial intelligence (AI) struggles to separate a shape from the background when they are not contrasting, whereas for a person this task is very simple. So far, one of the most effective and simple tests is "find all the squares in the picture that contain (a dress, a bag, a cat, etc.)", where the object does not stand out against a contrasting background (such tests exist and are in use).
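To make the point concrete, here is a minimal Python sketch (a toy of my own, not any real CAPTCHA implementation): a naive detector that relies on pixel contrast finds a high-contrast object but misses the same object on a non-contrasting background. The tile size, pixel values and threshold are arbitrary assumptions chosen for illustration.

```python
import numpy as np

def make_tile(background, object_value, size=32):
    """One square of the picture, with a small object patch in the centre."""
    tile = np.full((size, size), background)
    tile[12:20, 12:20] = object_value  # stand-in for the dress/bag/cat
    return tile

def naive_detector(tile, threshold=0.2):
    """Claim 'object present' if any pixel deviates strongly from the median."""
    return bool(np.any(np.abs(tile - np.median(tile)) > threshold))

high_contrast = make_tile(background=0.1, object_value=0.9)
low_contrast = make_tile(background=0.5, object_value=0.55)

print(naive_detector(high_contrast))  # True: the contrasting case is easy
print(naive_detector(low_contrast))   # False: the low-contrast object is missed
```

A person looking at both tiles would mark both squares; the contrast-based rule only gets the easy one, and that gap is what such a test exploits.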
Another set of tasks that AI is likely to fail is humor, figurative meaning, irony, metaphor. Question: how do you pose a question that (a) would be unambiguous and (b) would imply a figurative interpretation?
In general, it is difficult for AI to do anything that is not algorithmized. It will be hard for it to distinguish works of art from chaos if it does not have a complete catalog of works of art in its arsenal. But here's the trouble: people, too, may not have good taste in art. The problem is where to draw the line so that people, even those with very modest artistic (or musical) abilities, can tell whether the machine has made something beautiful or not… I would have failed that test.
Another equally difficult but more concrete point is moral judgment. The dilemma of "Solomon's judgment", or "Solomon's decision", is clear to every person of at least normal intelligence, however… The AI will need a great deal of information, problem conditions and training to solve it. One can experiment in this direction.
Today AI is built for practical tasks, and in some cases it can be calibrated to act as a very knowledgeable person, because in a particular application area most questions have a clearly dominant "correct" opinion.
The task of the test is not to measure knowledge or the ability to answer questions, but to tell a person apart from a trained robot. Even trite neural networks trained on ordinary chats often deceived specialists in a cursory, superficial conversation ten years ago; at the time that was a huge breakthrough, but, as the saying goes, the cart hasn't moved since.
The main problem of AI is that it is trained on a field of knowledge, whereas a person has self-awareness: their own opinions, their own individual and separate experience, coupled with personal impressions and the ability to keep direct personal experience apart from knowledge received from outside.
AI will easily beat any person on questions of knowledge, and for questions about well-known paradoxes and dilemmas it will, with a bit of a stretch, also find a suitable answer in its knowledge base, one that someone once already gave, and it will seem at least witty or worthy of attention, at least human (which is not surprising, since in that situation the answer was originally given by a person). Metaphors are harder, but not every person understands every metaphor either, and for some of them the AI can find a decent answer in its database.
The AI falls apart on personal questions: about its self, its ego, its personality, that is, about personal experiences, thoughts, complexes and fears, personal memories and dreams, desires and needs. Here the AI shows a clear dissociative disorder, constantly getting confused in its own claims. And it is too hard to load an AI with the knowledge of just one person, because there are too many possible personal questions to build them all into the robot explicitly.
So the easiest way to identify a robot is to ask it to tell you a little about itself and then ask a couple of clarifying questions. You will immediately see that the spy's cover story has not been fully worked out. For example, to the question "how did you spend the summer" it may answer that it stayed at home all summer, and then to the follow-up "and what kind of fish did you catch on the Volga this summer" it may answer "the fish weren't biting well, though I did catch one pike".
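As a rough sketch of that cover-story check (my own toy example, not a real bot-detection tool): record what the interlocutor claims about each topic and flag a later answer that conflicts with an earlier one. The string comparison below is only a stand-in for the real work, judging semantic consistency, which is precisely what a human interviewer does for free and the robot fails at.

```python
class LegendChecker:
    """Toy consistency check for a conversation partner's 'legend' (cover story)."""

    def __init__(self):
        self.claims = {}  # topic -> answer recorded earlier

    def check(self, topic, answer):
        """Record the answer; return the earlier claim if the new one conflicts with it."""
        earlier = self.claims.get(topic)
        self.claims[topic] = answer
        if earlier is not None and earlier != answer:
            return earlier  # the contradicted earlier claim
        return None

checker = LegendChecker()
checker.check("how you spent the summer", "stayed at home all summer")
conflict = checker.check("how you spent the summer",
                         "fished on the Volga, caught one pike")
print(conflict)  # -> 'stayed at home all summer'
```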
Questions about memories from the past, questions about feelings the interlocutor experienced earlier, questions containing complex metaphors that any living person should understand.
Obviously, the question refers to AI at its maximum possible development, not to the degree of perfection it has today. Today's AI is easily caught out by any simple "captcha" concerning the characteristics of its supposed personality.
A theoretically possible AI, with its extraordinary abilities, would be invincible in the verbal-logical field and would be able to "deceive" anyone. In fact, Wittgenstein indirectly pointed at exactly this in his Tractatus Logico-Philosophicus: in everything that can be described in formal language, human abilities have a definite limit.
But it will always be possible to "get around" AI on emotional nuances, on pure subtext, where one thing is said and something else entirely is implied, precisely because of that same invincibility in the formal-logical field. For any absolute advantage, as it develops further, turns into an absolute weakness )
Therefore, subtle trolls are the only ones who, in the brave new world, will be able to outplay the robots )