Machines cannot think like humans. Can a computer be conscious?

Or: digital nanotechnology in our future.

Preface

The American science fiction writer Isaac Asimov has a story about the artificial brain of a spaceship. It bore the female name Maria and was designed to carry out the verbal commands of the ship's commander, and it talked with him at length on all sorts of topics of human life, including love, trying to brighten his loneliness during the flight. As a result of this long and close communication, Maria fell in love with her commander and did not want to part with him when their journey ended.

So she did everything to prevent their return to Earth. The artificial intelligence of the spaceship, in the person of the woman Maria, felt like a woman in love and deliberately carried the spaceship off into the infinity of the universe, remaining forever with her beloved, even after his death.

So communication with artificial intelligence carries certain dangers. But our intellectuals, who talk a lot and often about our future on Russian TV channels, have no idea about this.

The answer is simple and lies in the sharp, sarcastic retort of an unknown author:
- No, it can't.

That's right. If you ask such stupid questions, it means you cannot think either.

But our intellectuals do not let up and continue to talk endlessly, on all manner of the country's television channels, about this fashionable topic of artificial intelligence, whose time, in their opinion, has already come.

Recently, on Channel 24, I heard yet another intellectual show by Russian humanists about the new "nanotechnologies" now emerging in our world alongside the first examples of artificial intelligence.

It's strange, but for some reason in Russia today our technical future is discussed mostly by "specialists" who are not technical people by education but humanists: political scientists, cultural scientists, linguists, philosophers, dealers, managers, political journalists, and so on and so forth. That is, people who not only cannot tell a bolt from a nut, but who also do not understand the essence of technical thinking. Yet they talk confidently about automatic machines and robotic systems replacing people in production processes and even in our homes, and about artificial intelligence and its fitness for the demands of our time.

People with a technical education, the so-called "techies", are not allowed on such television shows, because in the producers' understanding "techies" are people with a primitive way of thinking: narrow-minded, limited, uncontrollable, liable to say something wrong on air.

And the humanists themselves begin to declare with delight that an era is dawning in which products for mass consumption will be printed on 3D printers, so that factories with their perpetually smoking chimneys, perpetually poisoning our environment, will soon no longer be needed. Nor will the hundreds upon hundreds of trades practiced by people working in modern factories. Why keep them now? Consumers themselves will order the goods they need over the Internet and print them on their 3D printers.

For example, suppose you need something, anything from a car or a refrigerator to furniture or a gas stove: you look on the Internet, choose a suitable company to print the product you need, place an order, and they print it and deliver it straight to your home. It is the new "nanotechnologies" that will provide us with such a fabulous future.

Over in Skolkovo, new technologies in metallurgy and mechanical engineering are already being developed on computers. No laboratories in the old sense of the word, with their arrays of metallurgical and metalworking equipment. No industrial zones with smoke-belching factories in the ecologically clean Skolkovo zone; no workshops, conveyors, blast furnaces, converters, rolling mills, or hardware of any kind. Just computers and 3D printers, nothing more. True, the printers can so far print only plastic parts and products, and small ones at that. But that is only for now. Later we will switch to "nanomaterials" and life will become like a fairy tale.

Then the entire human community will switch completely to products made of "nanomaterials" printed on 3D printers and will begin to provide itself fully with everything necessary for life, according to the appropriate programs.

For example, there is a Russian geologist and geophysicist in the USA; I won't mention his last name, but he is a frequent guest on our TV. After graduating from MGRI and failing to find work in Russia, he left for the USA, where he very soon received a geophysical laboratory, then another laboratory in Canada, and now has a laboratory in Switzerland. He is not yet thirty but is already considered a major specialist in computer studies of the earth's crust. He does not go on geological expeditions and does not study cores extracted when drilling rock in different parts of the earth; he has transferred all of this hard and costly fieldwork to the computer, engages solely in computer studies of the earth's crust, and has already put forward his own theory of the formation of the Mohorovicic discontinuity, the lower boundary of the earth's crust at which a still unexplained abrupt increase in the velocity of longitudinal seismic waves occurs. And the scientific world accepted his theory.

My youth was spent in geology; I even studied at MGRI for four years, so I know in detail what fieldwork on geological expeditions involves and how the geological map of the USSR, the largest such map in the world, was compiled. But now it turns out that practical field geology is no longer needed by modern society. And the office side of geological work, which used to be done from the results of field surveys, can now be done at home or in one's office on a computer, in comfortable conditions; expeditions, with their extremely difficult living and working conditions somewhere beyond civilization, are no longer needed.

If this is so, then it turns out that our real world has indeed changed radically, and this new, so-called virtual reality is already actively crowding out previous ideas about our life today.

So now we really do not need factories to manufacture the products we require, nor expeditions to study the surface and depths of the earth; all we need are computers with 3D printers, which, given appropriate programming, will solve all the real problems of our new real life. But is that all?!

Then suddenly, as always happens, a pipe burst in our building's entrance hall, and I called the notorious housing office and summoned plumbers to deal with the accident. They needed no supercomputers with 3D printers, only plumbing tools, with which they came to us and spent more than two days replacing the burst pipes. But modern intellectuals tell me that this particular case of mine has nothing to do with artificial intelligence.

Apparently I am so much a person of a previous era, and understand today's realities so poorly, that there is no place for me in the new computer world. After all, that world must be completely different from our current society, because the ordinary human mind will not be able to control such computer processes; an artificial intelligence, an artificial brain, an artificial mind are needed here. And only a small fraction of modern people will be able to work with artificial intelligence, so the rest of the world's population will become redundant and useless to anyone. What will need to be done with them then is still unknown. We haven't decided yet!

This is how the idea of the "golden billion" is born: modern "stewards" of the earth, whose task is to manage and use earthly goods, while the rest of the earth's people will be needed only to serve them and create comfortable living conditions for them. But where are we to find these candidates for the "golden billion", these people of super-high intellect capable of working with artificial intelligence? They will have to be selected as early as the stage of pregnancy. And this selection will have to be carried out by artificial intelligence itself, by the artificial mind itself.

And this kind of nonsense went on for almost two hours on Channel 24. Where does all this come from in the modern world? The answer is simple. The decline in the general and professional level of education in the countries of Europe and America, not to mention Russia, is so severe that it leads the half-educated population of the West and of Russia to believe eagerly in such "stories" and fairy tales.

But life keeps shattering their intellectual picture of the life around us, of our current reality. It shatters it all the time. They do not notice, though, because their gaze is fixed on a future free of the dirt of everyday life.

After all, none of them raises even the most basic questions: who will then build housing and roads for these intellectuals, who will provide them with food, who will remove their waste, who will repair our houses, our yards, our water and gas pipelines, who will make and maintain the computers and printers themselves? Who? "Artificial intelligence will decide everything itself," they answer me. And they are confident in their answer and look down condescendingly on me and people like me.

But can this artificial intelligence compete with human intelligence? The question is rhetorical, not to say stupid. Yet they tell me that artificial intelligence is already defeating humans at chess, and at programming too, and that its painting and sculpture "sparkle" in ways no human imagination could conceive.

There is no point in arguing with them on this topic. But it seems to me that it is precisely their intelligence that artificial intelligence can replace; there is no difficulty in that, because they think in a standard and primitive way. But my mind, the mind of an engineer and inventor; the mind of my wife, a highly qualified physician; and the minds of other such people who do their work professionally, no artificial mind can replace. I am not even speaking of the minds of women and mothers.

The minds of the majority of government officials and deputies of the various "State Dumas" and their numerous assistants, however, would be well worth replacing with artificial ones at once. And bringing under control these "intellectuals", doctors of all manner of sciences who spend hours on TV ranting about our bright future governed by a "golden billion" of humanity armed with artificial intelligence, is already becoming a most important and necessary task in Russia. Otherwise we will choke on their empty verbiage.

PS The concept of thinking is different for each person. A man thinks when he is working out how to split a bottle three ways; a woman thinks when she chooses a dress for a date or does her makeup; a businessman thinks when he schemes to pay his employees less and pocket more; an engineer thinks when he solves the technical problem in front of him; and so on and so forth. What the present-day government official thinks about, I cannot imagine, for this sphere of human activity in today's Russia is an absolute mystery to me. After all, there is not even a hint of thought there, only primitive, selfish interests.

Alan Turing proposed an experiment to test whether a computer is conscious, and John Searle proposed a thought experiment meant to refute Turing's. Let us work through both arguments and, along the way, try to understand what consciousness is.

Turing test

In 1950, in his paper "Computing Machinery and Intelligence," the British mathematician Alan Turing proposed his famous test which, in his view, makes it possible to determine whether a particular computer is capable of thinking. The test essentially copied the imitation game then popular in Britain. Three people took part: a host, a man, and a woman. The host sat behind a screen and could communicate with the other two players only through notes. His task was to guess the gender of each of his interlocutors, who were not at all obliged to answer his questions truthfully.

Turing applied the same principle to his test of machine intelligence, except that the host must guess not the gender of the interlocutor but whether it is a machine or a person. If the machine can successfully imitate human behavior and confuse the host, it passes the test and thereby, presumably, proves that it has consciousness and can think.
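To make the protocol concrete, here is a minimal sketch of the test's structure in Python. It is an illustration only: the judge and the two reply functions are hypothetical placeholders, not anything taken from Turing's paper.

```python
import random

def imitation_game(next_question, final_guess, human_reply, machine_reply,
                   n_questions=5):
    """Skeleton of Turing's imitation game: a judge exchanges notes with
    two hidden respondents, A and B, then guesses which is the machine."""
    # Hide the players behind anonymous labels, in random order.
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        players = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(n_questions):
        q = next_question(transcript)                     # judge writes a note
        transcript.append((q, {k: f(q) for k, f in players.items()}))

    guess = final_guess(transcript)                       # judge says "A" or "B"
    truth = "A" if players["A"] is machine_reply else "B"
    return guess == truth   # True if the judge unmasked the machine
```

The machine "passes" when, over many rounds, this function returns True about as rarely as it would for two human players.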

Young Alan Turing (passport photo).
Source: Wikimedia.org

Chinese room

In 1980, the philosopher John Searle proposed a thought experiment intended to refute Turing's position.

Let's imagine the following situation. A person who neither speaks nor reads Chinese enters a room. In the room there are cards with Chinese characters on them, as well as a rule book written in a language the person does know. The book describes what to do with the symbols when other symbols arrive in the room. Outside the room stands an independent observer who speaks Chinese. His task is to converse with the person in the room, for example through notes, and find out whether his interlocutor understands Chinese.

The purpose of Searle's experiment is to demonstrate that even if the observer comes to believe that his interlocutor speaks Chinese, the person in the room still will not know Chinese: he does not understand the symbols he is manipulating. In the same way, a machine that passed the Turing test would not understand the symbols it uses and, accordingly, would not have consciousness.

According to Searle, even if such a machine could walk, talk, manipulate objects, and pass itself off as a fully fledged thinking person, it would still lack consciousness, since it would only be executing the program embedded in it, responding with predetermined reactions to predetermined signals.
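The mechanics of the room are easy to show in code. Below is a minimal sketch with a deliberately tiny, invented rule book: the function produces plausible replies purely by looking up shapes, and it understands nothing.

```python
# A toy "Chinese room": replies are produced purely by rule lookup.
# The two-entry rule book is invented for illustration; a convincing
# room would need vastly more rules, but the principle is identical.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def person_in_room(note: str) -> str:
    # The person matches symbol shapes against the book without
    # understanding a single character.
    return RULE_BOOK.get(note, "请再说一遍。")  # fallback: "Please say it again."

print(person_in_room("你会说中文吗？"))  # prints: 当然会。
```

To the observer outside, the answers look fluent; inside, there is only lookup.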

Philosophical Zombie

Now imagine the following situation, proposed by David Chalmers in 1996. Picture a so-called "philosophical zombie": a creature that resembles a person in every respect. It looks like a person, talks like a person, reacts to signals and stimuli like a person, and in general behaves like a person in every possible situation. But it has no consciousness and experiences no feelings. It reacts to whatever would cause a person pain or pleasure exactly as a person experiencing those sensations would, yet it does not actually experience them; it only imitates the reaction.

Is such a creature possible? How could we distinguish it from a real person who has feelings? What, in general, distinguishes a philosophical zombie from people? Could it be that they are among us? Or perhaps everyone except us is a philosophical zombie?

The fact is that in any case we have no access to the inner subjective experience of other people. No consciousness other than our own is accessible to us. We simply assume from the start that other people have one, that they are like us, because on the whole we have no particular reason to doubt it: others behave just as we do.

From the book The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson

Can a machine think?

When Alan Turing was thinking about building a stored-program computer, he recalled a statement made a century earlier by Ada Lovelace in the last of her "Notes" on Babbage's Analytical Engine: she argued that machines would not be able to think. Turing wondered: if a machine can change its own program based on the information it processes, isn't that a form of learning? Could this lead to the creation of artificial intelligence?

Questions related to artificial intelligence have arisen since ancient times, as have questions about human consciousness. As with most discussions of this kind, Descartes played an important role in framing them in modern terms. In his 1637 treatise Discourse on the Method (which contains the famous statement "I think, therefore I am"), Descartes wrote:

If machines were made that bore a resemblance to our body and imitated our actions as far as is conceivable, we would still have two sure means of knowing that they are not real people. First, such a machine could never use words or other signs, combining them as we do to communicate its thoughts to others. Secondly, although such a machine could do many things as well as, and perhaps better than, we do, it would certainly fail in others, and it would be found to act unconsciously.

Turing had long been interested in how a computer might replicate the workings of the human brain, and his curiosity was further fueled by his work on machines that deciphered coded messages. At the beginning of 1943, when Colossus was being built at Bletchley Park, Turing crossed the Atlantic and headed for Bell Labs in Lower Manhattan to consult with the group working on electronic speech encipherment (scrambling), a technology that could encrypt and decrypt telephone conversations.

There he met the colorful genius Claude Shannon, who as a graduate student at the Massachusetts Institute of Technology had written, in 1937, a thesis that became a classic: it showed how Boolean algebra, which represents logical statements as equations, could be implemented with electronic circuits. Shannon and Turing began meeting for tea and holding long conversations. Both were interested in the science of the brain, and they realized that their 1937 works had something fundamental in common: they showed how a machine operating on simple binary instructions could be set not only mathematical problems but all sorts of logical ones. And since logic was the basis of human thinking, a machine could, in theory, reproduce human intelligence.
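Shannon's idea can be made concrete in a few lines: each logical connective behaves like a switching element, and compound statements become wired-together circuits. A minimal sketch (my illustration, not Shannon's notation):

```python
# Boolean algebra as circuit elements: each gate is a tiny function,
# and logical statements become wired-together circuits.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

def XOR(a, b):
    # Composed from the basic gates, as it would be on a circuit board.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """One bit of binary addition reduced to pure logic:
    returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

# 1 + 1 = 10 in binary: sum bit 0, carry bit 1.
print(half_adder(True, True))   # (False, True)
```

This is exactly the bridge the text describes: once arithmetic is logic and logic is circuitry, a machine of simple binary elements can in principle attack any problem that can be stated logically.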

"Shannon wants to feed [the machine] not only data but cultural works!" Turing once exclaimed to his Bell Labs colleagues at lunch. "He wants to play it something musical." At another lunch in the Bell Labs cafeteria, Turing held forth in his high-pitched voice, audible to everyone in the room: "No, I am not going to build a powerful brain. I'm trying to construct just a mediocre brain, like, say, that of the president of the American Telephone and Telegraph Company."

When Turing returned to Bletchley Park in April 1943, he became friends with a colleague, Donald Michie, and they spent many evenings playing chess in a nearby pub. They often discussed the possibility of a chess-playing computer, and Turing decided to approach the problem in a new way: not to use the machine's raw power to calculate every possible move, but to try to give the machine the chance to learn chess through constant practice, letting it try new gambits and refine its strategy after every win or loss. If successful, this approach would be a significant breakthrough, one that would have pleased Ada Lovelace: machines would be shown capable of more than merely following the instructions given to them by humans; they could learn from experience and improve their own commands.

"It is believed that computing machines can carry out only the tasks they are ordered to perform," he explained in a talk to the London Mathematical Society in February 1947. "But is it necessary that they always be used this way?" He then discussed the possibilities of the new stored-program computers, which could themselves modify their instruction tables, and continued: "They could become like pupils who had learned much from their teacher but had added much more of their own. I think that when this happens, we will have to admit that the machine demonstrates the presence of intelligence."

When he finished his talk, the audience fell silent for a moment, stunned by Turing's claim. His colleagues at the National Physical Laboratory did not understand his obsession with thinking machines at all. The laboratory's director, Sir Charles Darwin (grandson of the biologist who created the theory of evolution), wrote to his superiors in 1947 that Turing "wants to extend his work on the machine still further toward biology" and to answer the question: "Could a machine be made that can learn from experience?"

Turing's bold idea that machines might one day think like humans met fierce opposition at the time, and still does. There were the entirely expected religious objections, as well as non-religious ones that were highly emotional in both substance and tone. The neurosurgeon Sir Geoffrey Jefferson, in a speech given on receiving the prestigious Lister Medal in 1949, declared: "We will not be able to agree that a machine is as intelligent [as a person] until it can write a sonnet or compose a concerto under the influence of its own thoughts and emotions, and not through a random choice of symbols." Turing's reply to a reporter from the London Times seemed somewhat flippant, but subtle: "The comparison is perhaps not entirely fair, since a sonnet written by a machine would be better judged by another machine."

Thus the ground was laid for Turing's second seminal work, "Computing Machinery and Intelligence," published in the journal Mind in October 1950. In it he described a test that later became known as the Turing test. He began with a clear statement: "I propose to consider the question: 'Can machines think?'" With a schoolboy's excitement he invented a game that is played and discussed to this day. He proposed giving the question a concrete meaning, and himself offered a simple functional definition of artificial intelligence: if a machine's answers to questions are indistinguishable from a person's, then we have no reasonable grounds to claim that the machine does not "think."

The Turing test, which he called the imitation game, is simple: an examiner sends written questions to a person and a machine in another room and tries to determine which answers are the person's. Turing offered a sample questionnaire:

Question: Please write me a sonnet about the Forth Bridge.

Answer: Don't ask me about it. I never knew how to write poetry.

Q: Add 34,957 and 70,764.

A (after a pause of about 30 seconds): 105,621.

Q: Do you play chess?

A: Yes.

Q: I have only a K (king) at K1, and no other pieces. You have only a K at K6 and an R (rook) at R1. It is your move. What do you play?

A (after a pause of 15 seconds): R to R8, mate.

This sample dialogue of Turing's contains several important details. Careful examination shows that the respondent, after thinking for thirty seconds, made a small error in the addition (the correct answer is 105,721). Does this show that he was human? Perhaps. But then again, perhaps a cunning machine was pretending to be human. Turing also answered Jefferson's point that a machine cannot write a sonnet: the reply given above could quite well have come from a man who admits he cannot write poetry. Later in the article, Turing presented another imaginary exchange demonstrating how difficult it is to use the ability to write a sonnet as a criterion for membership in the human race:

Q: Do you think the first line of the sonnet, "Shall I compare thee to a summer's day," would not be spoiled, and perhaps even improved, by changing it to "a spring day"?

A: That would break the meter.

Q: How about "a winter's day"? The meter would then be fine.

A: Yes, but no one wants to be compared to a winter's day.

Q: Are you saying that Mr. Pickwick reminds you of Christmas?

A: In a sense.

Q: However, Christmas Day falls on a winter's day, and I don't think Mr. Pickwick would mind the comparison.

A: I don't think you're being serious. By a winter's day one means a typical winter's day, not a special one like Christmas.

The point of Turing's example is that it may be impossible to tell whether the responder was a human or a machine pretending to be human.

Turing ventured a guess as to whether a computer could win this imitation game: "I believe that in about fifty years' time it will be possible to programme computers... to play the imitation game so well that an average examiner will have no more than a 70 per cent chance of correctly identifying the respondent after five minutes of questioning."

In the paper, Turing attempted to rebut many possible objections to his definition of intelligence. He dismissed the theological argument that God bestowed a soul and mind only on humans, arguing that this "implies a serious restriction of the omnipotence of the Almighty." He asked whether God "has freedom to confer a soul on an elephant if He sees fit." Let us suppose so. By the same logic (which, given that Turing was a non-believer, sounds caustic), God could certainly give a soul to a machine if He so wished.

The most interesting objection to which Turing responded, especially for our purposes, is that of Ada Lovelace, who wrote in 1843: "The Analytical Engine does not pretend to create anything genuinely new. The machine can do whatever we know how to order it to do. It can follow analysis, but it cannot anticipate any analytical relations or truths." In other words, unlike the human mind, a mechanical device cannot have free will or take initiatives of its own; it can only do what it has been programmed to do. In his 1950 paper, Turing devoted a section to this statement, calling it "Lady Lovelace's Objection."

His ingenious reply to this objection was the argument that a machine can in fact learn, thereby becoming a thinking agent capable of producing new thoughts. "Instead of writing a program to imitate the thinking of an adult, why not try to write one that imitates the thinking of a child?" he asked. "If you then put it through an appropriate course of education, you could eventually obtain the intellect of an adult." He acknowledged that a computer's education would differ from a child's: "It cannot, for example, be fitted with legs, so it cannot be asked to go out and fill the coal scuttle. It probably cannot have eyes... One could not send the creature to school: the other children would make it a laughing stock." The machine-child would therefore have to be taught differently. Turing proposed a system of punishments and rewards that would encourage the machine to repeat some actions and avoid others. Ultimately such a machine could develop its own ideas and its own explanations of phenomena.
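Turing gave no algorithm for this, but his reward-and-punishment scheme is recognizably what is now called reinforcement learning. Here is a minimal sketch under that modern reading, with an invented "teacher" that secretly prefers one action; the machine starts out acting at random and drifts toward whatever earns rewards.

```python
import random

# A "child machine" in miniature: it knows nothing about its actions
# and only accumulates scores from rewards (+1) and punishments (-1).
actions = ["a", "b", "c"]
scores = {a: 0.0 for a in actions}

def teacher(action):
    # Hypothetical teacher: secretly rewards only action "b".
    return 1.0 if action == "b" else -1.0

for step in range(1000):
    if random.random() < 0.1:                 # occasionally explore
        choice = random.choice(actions)
    else:                                     # otherwise exploit the best score
        choice = max(actions, key=scores.get)
    # Nudge the chosen action's score toward the reward received.
    scores[choice] += 0.1 * (teacher(choice) - scores[choice])

print(scores)   # "b" ends up with by far the highest score
```

Nothing in the loop was told that "b" is correct; the preference emerges from experience, which is the whole of Turing's point against Lady Lovelace.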

But even if a machine could imitate intelligence, Turing's critics objected, it would not be truly intelligent. When a person passes the Turing test, he uses words that are tied to the real world: to emotions, experiences, sensations, and perceptions. A machine does not. Without such ties, language becomes just a game, divorced from meaning.

This objection led to the most durable rebuttal of the Turing test, formulated by the philosopher John Searle in a 1980 essay. He proposed a thought experiment called the "Chinese room," in which an English speaker who knows no Chinese is given a comprehensive set of rules explaining how to form any combination of Chinese characters. Handed a set of characters, he assembles combinations according to the rules but without understanding the meaning of the phrases he composes. If the instructions are good enough, he might convince an examiner that he genuinely speaks Chinese. Nevertheless, he would not understand a single text he himself produced; they would carry no meaning for him. In Ada Lovelace's terms, he would not pretend to create anything new; he would simply carry out the actions he had been ordered to perform. Likewise, the machine in Turing's imitation game, however well it imitates the human mind, will not understand or be aware of anything that is said. It makes no more sense to say that the machine "thinks" than to say that the person following the voluminous instructions understands Chinese.

One answer to Searle's objection has been the claim that even if the person does not understand Chinese, the whole system assembled in the Chinese room, that is, the man (the processing unit), the instructions for handling the characters (the program), and the files of characters (the data), may indeed understand Chinese. There is no definitive answer here. Indeed, the Turing test and the objections to it remain to this day the most debated topic in cognitive science.

For several years after writing "Computing Machinery and Intelligence," Turing seemed to relish the fray he himself had provoked. With caustic humor he parried the claims of those who prattled about sonnets and sublime consciousness. In 1951 he teased them: "One day ladies will take their computers for a walk in the park and tell each other, 'My little computer said such a funny thing this morning!'" As his mentor Max Newman later noted, "his humorous but brilliantly precise analogies, with which he expressed his views, made him a delightful conversationalist."

There was one topic that came up more than once in discussions with Turing and would soon become sadly relevant: the role that sexuality and emotional desire, unknown to machines, play in the workings of the human brain. One example is the public debate broadcast on BBC television in January 1952 between Turing and the neurosurgeon Sir Geoffrey Jefferson, moderated by the mathematician Max Newman and the philosopher of science Richard Braithwaite. Braithwaite, who argued that creating a genuinely thinking machine would require "equipping the machine with something like a set of physical needs," stated: "Man's interests are determined largely by his passions, desires, motivations, and instincts." Newman chimed in that machines "have fairly limited needs, and they cannot blush when they are embarrassed." Jefferson went even further, repeatedly citing "sexual urges" as an example and referring to human "emotions and instincts, such as those relating to sex." "Man is a victim of sexual desires," he said, "and can make a fool of himself." He talked so much about how sexual urges affect human thinking that the BBC editors cut some of his remarks from the broadcast, including his statement that he would not believe a computer could think until he saw it touch the leg of a female computer.

Turing, who was still closeted about his homosexuality, fell silent during this part of the discussion. In the weeks before the broadcast was recorded on January 10, 1952, he committed a series of acts so purely human that a machine would have found them incomprehensible. He had just finished a scientific paper, and then wrote a story about how he intended to celebrate the occasion: "It had been quite a while since he had 'had' someone, in fact since last summer, when he met that soldier in Paris. Now that his work was done, he could justifiably consider that he had earned the right to a relationship with a gay man, and he knew where to find a suitable candidate."

In Manchester, on Oxford Street, Turing found a nineteen-year-old homeless youth named Arnold Murray and began a relationship with him. When he returned from the BBC after recording the broadcast, he invited Murray to move in with him. One night Turing told young Murray about his idea of playing chess against a wicked computer that he could defeat by driving it through anger, joy, and smugness in turn. The relationship grew more complicated in the days that followed, and one evening Turing came home to find he had been burgled. The culprit turned out to be a friend of Murray's. In reporting the incident to the police, Turing was eventually forced to tell the officers about his sexual relationship with Murray, and he was arrested for "gross indecency."

At his trial in March 1952, Turing pleaded guilty, though he made it clear he felt no remorse. Max Newman was called as a character witness. Convicted and stripped of his security clearance, Turing faced a choice: prison, or release on condition of hormone therapy, injections of synthetic estrogen that kill sexual desire and reduce a person to something like a chemically controlled machine. He chose the latter and endured the course for a year.

At first Turing seemed to take it all in stride, but on June 7, 1954, he committed suicide by biting into an apple laced with cyanide. His friends noted that he had always loved the scene in Snow White in which the evil fairy dips an apple into a poisonous brew. He was found in his bed with foam at his mouth, cyanide in his body, and a half-eaten apple beside him.

Can machines do this?

John Bardeen (1908–1991), William Shockley (1910–1989), Walter Brattain (1902–1987) at Bell Labs, 1948

First transistor manufactured at Bell Labs

Colleagues including Gordon Moore (seated left) and Robert Noyce (standing center with a glass of wine) toast William Shockley (at the head of the table) on the day he was awarded the Nobel Prize, 1956.


Can a machine think?

It is not entirely clear how a computer could do anything that is not "in the program." Can anyone be ordered to reason, guess, or draw conclusions?

Opponents of the "thinking machines" thesis usually consider it sufficient to cite a well-known fact: a computer in any case does only what is specified in its program, and therefore can never "think," since "thoughts according to a program" can no longer be counted as thoughts.

This is both true and false. Strictly speaking, indeed: if a computer does not do what the program prescribes for it at a given moment, it must be considered broken.

However, what a person regards as a "program" and what constitutes a program for a computer are very different things. No computer could carry out the grocery-shopping "program" that you put into your ten-year-old son's head, even if that "program" contained only completely unambiguous instructions.

The difference is that computer programs consist of an enormous number of much smaller, more particular commands. Tens and hundreds of such micro-commands make up a single step; thousands and even millions make up the entire grocery-shopping program in a form a computer could execute.

However ridiculous such minute regulation may seem to us, for a computer this method is the only workable one. And the most astonishing thing is that it gives the computer the chance to be far more "unpredictable" than is commonly believed!

Indeed, if the entire program consisted of the single order "go grocery shopping," the computer by definition could do nothing else: it would stubbornly head for the supermarket no matter what was happening around it. In other words, although "human" intelligence is needed to understand a short program, the result of such a program, were it executed by a computer rather than a person, would be very rigidly determined.

We are forced, however, to give computers far more detailed instructions, specifying their every smallest step. At the same time, we have to add instructions to the program that are not directly related to the task. Thus, in our example, the robot must be taught the rules for crossing the street (including the rule "if a car is coming at you, jump aside").

These instructions must necessarily include checking various conditions in order to make decisions, requesting information (about the weather, about the locations of stores) from various databases, weighing the importance of different circumstances, and much more. As a result, a computer with such a program has many more "degrees of freedom": many places where it can deviate from the path to the final goal.

Of course, in the overwhelming majority of cases such deviations will be undesirable, and we try to create operating conditions for the computer under which the risk of a "car jumping out from around the corner" is minimal. But life is life, and it is impossible to foresee every conceivable surprise. That is why a computer is capable of surprising us both with an unexpectedly "reasonable" reaction to seemingly unpredictable circumstances and with incredible "stupidity" in the most ordinary situations (more often, unfortunately, the latter).
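A sketch of the point being made here: the one-line order "go grocery shopping" becomes, for a machine, a long chain of small, individually checked steps, and every check is a branch point where execution can leave the straight path to the goal. All the steps, conditions, and the toy world model below are invented for illustration.

```python
# "Go grocery shopping" expanded into small, individually checked steps.
# Every `if` is a "degree of freedom": a place where the program may
# deviate from the direct route to its goal.
def go_shopping(world):
    log = []
    if world["raining"]:
        log.append("took umbrella")               # side instruction, not shopping
    for street in world["route"]:
        if world["car_approaching"].get(street):
            log.append("jumped aside on " + street)  # the safety rule from the text
        log.append("crossed " + street)
    bought = [item for item in world["shopping_list"]
              if item in world["in_stock"]]
    if not bought:
        log.append("came home empty-handed")      # an "unpredictable" outcome
    return bought, log

world = {
    "raining": True,
    "route": ["Elm St", "Main St"],
    "car_approaching": {"Main St": True},
    "shopping_list": ["bread", "milk"],
    "in_stock": {"bread"},
}
print(go_shopping(world))
```

Multiply these few checks by millions of micro-commands, and the behavior of the whole, while still fully determined, stops being predictable at a glance.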

It is the construction of complex programs based on a detailed analysis of the smallest steps making up the human thinking process that constitutes the modern approach to building "thinking machines" (one of the approaches, at least). Of course, complexity is not everything. And yet, among the scientists working on this problem, few doubt that the "smart" programs of the 21st century will differ from today's primarily in their immeasurably greater complexity and number of elementary instructions.

Many modern information-processing systems are already so complex that some features of their behavior simply cannot be deduced from the programs themselves; they have to be investigated, literally, by running experiments and testing hypotheses. And conversely, many features of intelligent human activity that at first glance seem almost like "insights from above" are already modeled rather well by complex programs consisting of many simple steps.

Alan Turing published a large article that later became a classic: "Computing Machinery and Intelligence." Its title is often rendered in Russian as "Can a machine think?" In the section "Contrary Views on the Main Question," the author examined various objections and myths associated with artificial intelligence and the modeling of creative processes, and gave his comments...

1. The theological objection. "Thinking is a property of man's immortal soul. God gave an immortal soul to every man and every woman, but gave no soul to any other animal or to machines. Therefore neither animal nor machine can think."

I cannot agree with anything that has just been said, and I will try to argue in theological terms. I should find this objection more convincing if animals were placed in the same class as men, for, in my opinion, there is a greater difference between the typically animate and the typically inanimate than between man and the other animals. The arbitrary character of this orthodox point of view becomes still clearer if we consider how it might look to a person professing some other religion. How, for example, would Christians react to the Muslim view that women have no souls? But let us leave this question and turn to the main objection. It seems to me that the argument cited above, with its reference to the soul of man, implies a serious restriction of the omnipotence of the Almighty.

Granted, there are certain things God cannot do, such as making one equal to two; but who among believers would not agree that He is free to infuse a soul into an elephant if He finds the elephant deserving of it? We may seek a way out in the assumption that He exercises His power only in combination with mutations that improve the brain enough for it to meet the requirements of the soul He wishes to infuse into the elephant. But the very same can be said in the case of machines. The reasoning only seems different because in the case of machines it is harder to "digest." In essence, it amounts to saying that we consider it highly unlikely that God would find the circumstances suitable for conferring a soul on a machine; that is, it really comes down to the other arguments discussed in the rest of the article. In attempting to build thinking machines we act no more disrespectfully toward God, usurping His power to create souls, than we do in begetting children; in both cases we are merely instruments of His will, producing only dwellings for the souls that God creates.

All this, however, is idle speculation. Whatever such theological arguments may be advanced for, they do not make much of an impression on me. In the old days, though, such arguments were found very convincing. In the days of Galileo it was believed that scriptural texts such as "And the sun stood still in the midst of heaven, and hasted not to go down about a whole day" (Joshua 10:13) and "Thou hast laid the foundations of the earth, that it should not be moved for ever" (Psalm 103:5) sufficiently refuted the theory of Copernicus. In our time such evidence seems groundless, but before the modern level of knowledge was reached, such arguments made an entirely different impression.

2. The "ostrich" objection. "The consequences of machine thinking would be too terrible. Let us hope and believe that machines cannot think."

This objection is rarely expressed so openly. But it sounds convincing to most of those who so much as entertain it. We are inclined to believe that man is intellectually superior to the rest of nature. It would be best of all if it could be proved that man is necessarily the most perfect being, for then there would be no danger of his losing his dominant position. Clearly the popularity of the theological objection stems from this feeling. The feeling is probably especially strong among intelligent people, since they value the power of thinking more highly than others do and are more inclined to rest their belief in man's superiority on this ability. I do not think this objection is substantial enough to require any rebuttal. Consolation would be more appropriate here; perhaps it should be sought in the doctrine of the transmigration of souls.

3. The mathematical objection. There are a number of results in mathematical logic that can be used to show that there are certain limits to the capabilities of discrete-state machines. The best known of these results, Gödel's theorem, shows that in any sufficiently powerful logical system one can formulate statements that can be neither proved nor disproved within the system, unless the system itself is inconsistent. There are other results, in some respects similar, due to Church, Kleene, Rosser, and Turing. The last result is the most convenient for us, since it relates directly to machines, while the others can be used only as relatively indirect arguments (for example, if we relied on Gödel's theorem, we would also need some means of describing logical systems in terms of machines and machines in terms of logical systems). Turing's result refers to a machine that is essentially a digital computer with unlimited memory capacity, and it establishes that there are certain things such a machine cannot do. If it is set up to answer questions, as in the imitation game, then there will be questions to which it either answers incorrectly or cannot answer at all, however much time it is given. There can, of course, be many such questions, and questions that cannot be answered by one machine may be answered satisfactorily by another. We assume here, of course, that the questions are of the yes-or-no type rather than questions like "What do you think of Picasso?" The following is the type of question we know a machine must fail: "Consider the machine characterized as follows: ... Will this machine ever answer 'yes' to any question?" If in place of the dots we put a description (in some standard form, for example, like the one used in Section V) of a machine standing in a certain relatively simple relation to the machine being questioned, then it can be shown that the answer to this question is either wrong or not forthcoming at all. Such is the mathematical result; it is claimed to prove a limitation on machines to which the human intellect is not subject. […]
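The result of Turing's that this objection leans on can be sketched in a few lines. Suppose someone supplies a function claimed to predict, for any program given itself as input, whether it will answer "yes"; the classic diagonal construction builds a program on which that prediction must fail. A sketch of the argument, not a proof script:

```python
# Diagonal sketch: no predictor can be right about a program built
# to contradict it.
def make_contrarian(predict):
    def contrarian(program):
        # Answer the opposite of whatever is predicted for `program`.
        return not predict(program)
    return contrarian

def naive_predict(program):
    # Stand-in predictor; any other choice runs into the same trap.
    return True

contrarian = make_contrarian(naive_predict)
prediction = naive_predict(contrarian)   # what the predictor claims
actual = contrarian(contrarian)          # what actually happens
print(prediction, actual)                # True False: the prediction fails
```

Whatever `naive_predict` is replaced with, asking the corresponding contrarian about itself defeats it; this is the "question the machine cannot answer correctly" that the objection invokes.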

The answer to this objection, in brief, is as follows. It has been established that the capabilities of any particular machine are limited, but the objection under examination contains the bare assertion, offered without any proof, that no such limitations apply to the human intellect. I do not think this side of the matter can be dismissed so lightly. When one of these machines is asked the appropriate critical question and gives a definite answer, we know in advance that the answer will be wrong, and this gives us a feeling of superiority. Is this feeling illusory? It may well be quite genuine, but I do not think too much importance should be attached to it. We ourselves give wrong answers to questions too often for the satisfaction we feel at the fallibility of machines to be justified. Besides, the feeling of superiority can apply only to the one machine over which we have won our, in essence, very modest victory. There can be no question of a simultaneous triumph over all machines. In short, then, for any given machine there may be people who are smarter than it, but then again there may be other, still smarter machines, and so on. I think that those who share the view expressed in the mathematical objection would, on the whole, be willing to accept the imitation game as a basis for further discussion. Those convinced of the validity of the two previous objections would probably not be interested in any criteria at all.


