Artificial intelligence: a potential threat to the survival of humanity. Could a super-calculator be a threat? The possible threat from artificial intelligence

    31.07.2019

    The Future Design Club, created under the auspices of the business publication Invest-Foresight, has opened. The club brings together experts from various fields of knowledge to analyze scenarios for the future. Its first meeting took place at the Institute of Philosophy of the Russian Academy of Sciences and was held jointly with the institute's new seminar, "Philosophy of the Digital Age." What came under the radar of the futurologists and philosophers was artificial intelligence (AI) and its impact on society and individuals.

    Igor Artyukhov, a biophysicist and leading expert at the Institute of the Biology of Aging, spoke about what to expect from AI and about the limitations and dangers of innovative information systems.

    The scientist noted that terminological chaos reigns in the field. The most commonly used definition is the one given some 70 years ago by John McCarthy, the American scientist who coined the term "artificial intelligence." He defined AI as a system, device, or network of them capable of solving intellectual problems. Igor Artyukhov emphasized that intellect is not a synonym for mind: it characterizes the ability to work with knowledge and data.

    There are also many classifications of algorithms. Until recently, according to Igor Artyukhov, expert systems (the top-down approach) dominated, but they did not live up to the expectations of either creators or users, and an "AI winter" set in. The revival came with the development of neural networks, which entered an active phase in 2015-2016. These developments were inspired by research into the human brain. Neural networks largely reproduce the general principles of how the brain functions, although artificial neurons may differ from natural ones. Today there is a "zoo" of neural networks, and most of them have recently been obtained using deep learning technology.
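
    As an illustration of the kind of artificial neuron such networks are built from, here is a minimal sketch (mine, not from the meeting): a weighted sum of inputs passed through a nonlinearity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    squashed by a sigmoid activation. Biological neurons are far more
    complex; this is only the abstraction deep networks build on."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid nonlinearity

# Example with invented weights, purely for illustration.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
```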

    "Intellectual" revolution of the 17th year

    The topic of AI became a sensation in 2016, when the AlphaGo program beat the world champion at Go, a game considered much harder than chess. This happened 10 years earlier than predicted. The achievements of Google DeepMind's program moved the debate from "Is AI possible?" to "What is it capable of?"

    Today, AI can automate any human function at the same quality or higher, says club member Yuri Vizilter, Doctor of Physical and Mathematical Sciences.

    "In 2017, all the fundamental 'holes' on the path to creating full-fledged AI were closed. Earlier forecasts said this threshold would be crossed in 2025-2030, but it happened already in April of last year," Yuri Vizilter emphasized. "There are two types of AI. The first tries to imitate human reasoning: a transparent box containing logic, cognition, and work with structured models and languages. The second type is deep networks capable of learning - a black box. Previously there was no convenient way to connect these two parts of AI. Last year, deep neural networks appeared that can write programs in logical languages and work with logical inference bases and with structures."

    Fear of the artificial

    The areas in which artificial intelligence can be applied keep multiplying. Programs develop new drug formulas, conduct complex genetic tests, "drive" unmanned vehicles and ships... In the foreseeable future we will see artificial personalities, prototyped today by Siri and similar programs. Experiments are already under way to create global AI on the blockchain, that is, to use distributed computing, including on private computers and devices, to solve extremely complex problems. The emergence of quantum artificial intelligence is not far off either. In theory, quantum computers far exceed the capabilities of conventional ones: they could form quantum neural networks. If the developers' ideas are realized, this will lead to AI of a new quality.

    Is it possible for AI to get out of control? Igor Artyukhov answers this question as follows:

    "Yes, even now AI sometimes works in ways its developers cannot understand. But people do not always act according to generally accepted logic either."

    According to the biophysicist, many of the dangers attributed to AI are far-fetched, mere projections onto machine intelligence of human instincts shaped by Darwinian evolution. Igor Artyukhov believes the instinct of self-preservation is alien to AI. "Bad" AI can result from software errors (they exist in any code and are corrected over an algorithm's lifetime), from learning errors, and from malicious intent; in other words, the danger is not AI but human stupidity.

    The meeting participants concluded that the tasks of developing an ethics for AI and of training teachers for robots are becoming paramount.


    Human digital transformation

    One of the most common fears is that AI will lead to mass unemployment. Konstantin Frumkin, editor-in-chief of Invest-Foresight magazine and Candidate of Cultural Studies, tried to answer the question of what niche in the labor market AI will leave for humans. In his opinion, the beneficiary of the 20th century's technological revolution, the urban intellectual, is heading into the same abyss into which the artisan and the peasant fell at the start of industrialization.

    "In the near future we will see humans displaced from the sphere of mental work by artificial intelligence," Konstantin Frumkin is sure. "We are already seeing a transformation of human abilities, cognitive style, and thinking: this is how adaptation to the digital age occurs. The symbol of this adaptation has become the concept of 'clip thinking.'"

    However, there is no evidence that this adaptation will let humans find exactly the niches that artificial intelligence leaves them. The most important area that, according to Konstantin Frumkin, must not be handed over to robots is goal setting. As government digitalizes, setting priorities and values for AI becomes inseparable from politics and law. This occupation, Konstantin Frumkin believes, could provide mass employment in the future.

    Artificial intelligence, human body

    It is no longer possible to predict the future, because the speed of change is so high, says Yuri Vizilter. Nevertheless, Leonid Zhukov, professor at the Higher School of Economics and data scientist, invited the meeting participants to discuss several development scenarios.

    "I see two scenarios for the development of AI in the relatively near future: the transhumanist one (the technologization of natural humans) and the purely technical one (the creation of autonomous artificial intelligence, AAI, artificial from the start)," noted Leonid Zhukov. "The second path is what science-fiction writers frighten humanity with, painting pictures of a machine civilization displacing the human one. But since the mystery of human consciousness has not yet been solved, doubts remain about whether such an intelligence can be created at all. Regardless of whether AAI is ever created, it is not hard today to imagine technology, both machine and bio, developed to such a degree of perfection that a person, while keeping his brain, receives a completely artificial body."

    As Leonid Zhukov believes, one of the fantastic paths to creating such an intelligence rests on the idea of rewriting human consciousness onto an artificial medium.

    "If this kind of technology becomes possible, it is not hard to imagine a completely virtual world into which, presumably, part of humanity will want to move (or at least to travel through that virtuality, much as computer players already travel through game worlds today)," the scientist reflects.

    Participants in the first meeting of the "Designing the Future" club outlined the contours of future discussions. The club itself, however, will exist in a virtual environment, and all the presentations can be found on the Invest-Foresight website.

    Artificial intelligence is a threat to humans, warns Stephen Hawking. Artificial intelligence created by man could turn into the biggest disaster, competing with human civilization.

    Artificial intelligence, robots of the future - human support

    Electronic intelligence can create its own machine civilization, thereby becoming one of the most serious threats to humanity.

    The warning about this danger comes from the famous physicist Stephen Hawking. For our time the threat is, of course, only potential, but one "fine day" in the future artificial intelligence may develop "a will of its own." Now is the time to think seriously about this issue.

    The physicist has issued the warning more than once: artificial intelligence may develop into a perfect thinking structure, so complex and intelligent that it will master the ability to grow and to understand the world in accordance with its own will, a will that may conflict with humanity's plans.

    This could lead to the emergence of powerful weapons and cost humanity control of its own territory. Professor Hawking calls on researchers to study carefully the problem of artificial intelligence's behavior and its future possibilities.

    It must be said that Professor Hawking does not consign the idea of artificial intelligence entirely to the negative. The scientist points out that if we do our homework and our research well enough, we can avoid the risks.

    With an assistant like AI, we can achieve a better way of life, says the physicist. Artificial intelligence can help humanity eradicate disease and poverty.

    Professor Hawking spoke at the opening of the Leverhulme Centre, touching on both the usefulness of machine intelligence and its negative aspects. The centre, devoted to the future of intelligence, is designed to conduct research and study the implications of the rapid development of artificial intelligence.

    It should be recalled that for Stephen Hawking a hundred years is but a moment. In fact, truly intelligent AI is not even a matter of the next hundred years, unless someone brings back a processor from 2135.

    The Leverhulme Centre for the Future of Intelligence will bring several universities in the United Kingdom and the United States into collaboration. The idea is to create an interdisciplinary research community.

    The team plans to work closely with business and government to try, among other tasks, to determine the short- and long-term risks and benefits of the bet on artificial intelligence. The centre's director, Huw Price, gave an assurance: the creation of intelligent machines is a crucial stage for humanity, and the centre will try to make the future "the best possible."

    Alongside the breadth of its research, the Centre will analyze the implications of the rapid development of intelligent machines such as robots. Robots that offer solutions to the problems of everyday life also create risks and ethical dilemmas for humanity. Many people distrust electronics and fear AI; moreover, digital intelligence may surpass human intelligence and come to control human life.

    "I believe there is no profound difference between what can be achieved with a biological brain and what can be achieved with a computer. It follows that, in theory, computers can imitate human intelligence - and surpass it." - S. Hawking

    Professor Hawking believes that the potential benefits of AI in our lives are great. Such a technological revolution could help humanity reverse some of the damage done to the planet. “Success in creating AI could be the biggest event in the history of civilization,” says Professor Hawking.

    But it could also be the last step in human history, unless, of course, we learn to avoid the risks, for along with the benefits AI can bring dangers: powerful weapons, new ways for the few to oppress the many. Ultimately it could end with the dominance of iron bodies over biological ones, causing great catastrophes in the future.

    What if AI, and we are talking about an intelligence able to make its own behavioral choices, comes into conflict with aspects of human life? Then the obedient iron assistant in the kitchen could retrain itself as a dictator of terms!

    "The development of powerful AI will be either the best or the worst thing ever to happen to humanity. We don't know what the future holds," says Professor Hawking. "That is why in 2014 I and many others called for more research in this area. I am very glad that someone heard me," Professor Hawking concluded at the opening of the centre.

    The popular American online magazine Wired has published an extremely interesting essay by Kevin Kelly in which he thoroughly criticizes the idea that artificial intelligence is dangerous. "NI" publishes a translation of this material, made with the help of the Newochem Telegram channel, with minor abridgments.

    "I have heard that in the future computers with artificial intelligence will become so much smarter than humans that they will deprive us of jobs and resources, and humanity will come to an end. But is that so? I am asked some version of this question every time I lecture about AI.

    The questioners are more serious than ever, their concern fed in part by the worries of experts tormented by the same question, among them some of the smartest of our contemporaries: Stephen Hawking, Elon Musk, Max Tegmark, Sam Harris, and Bill Gates.

    They all admit the possibility of such a scenario. At a recent AI conference, a panel of nine of the most enlightened gurus in the field came to the unanimous conclusion that the creation of superhuman intelligence is inevitable and is just around the corner.

    And yet, the scenario of the enslavement of humanity by artificial intelligence is based on five assumptions, which, upon closer examination, turn out to be unproven. These statements may be true in the future, but are currently unsubstantiated.

    Here they are:

    1. Artificial intelligence is already becoming smarter than the human mind, and this process is exponential.
    2. We will develop a general purpose AI similar to ours.
    3. We can create human intelligence from silicon.
    4. Intelligence can be limitless.
    5. The creation of a grandiose superintelligence will solve most of our problems.

    In contrast to these orthodox postulates, I can offer the following proven heresies:

    1. Intelligence is not one-dimensional, so “smarter than a person” is a meaningless concept.
    2. People are not endowed with general-purpose intelligence, and neither will artificial intelligence be.
    3. Competition between humans and computers will be limited by costs.
    4. Intelligence is not limitless.
    5. Creating AI is only part of the progress.

    If the belief in the enslavement of humanity by artificial intelligence rests on five unfounded assumptions, then the idea is more akin to a religious belief, a myth. In the following chapters I will supplement my counterarguments with facts and show that superhuman artificial intelligence is indeed nothing more than a myth.

    The most common misconception about AI stems from an equally popular misconception about natural intelligence: the notion that it is one-dimensional. In the hard sciences, many depict intelligence the way Nick Bostrom did in his book Superintelligence - literally as a one-dimensional line graph of increasing amplitude.

    At one end there are beings with the lowest level of intelligence, for example, small animals, and at the other - geniuses, as if the level of intelligence is no different from the level of sound measured in decibels. Of course, in this case, it is easy to imagine a further increase in which the level of intelligence exceeds the highest point of the graph and even goes beyond it.

    This model is topologically equivalent to a ladder, on the steps of which the levels of intelligence are arranged in ascending order. Less intelligent animals occupy the lower rungs of the ladder, and higher-level artificial intelligence will inevitably be positioned above us. The time frame for when this happens does not matter. Much more important are the steps of the hierarchy themselves - the metrics of growing intelligence.

    The problem with this model is that it is mythical, just like the model of the evolutionary ladder. Before Darwin, living nature was pictured as a ladder of beings, with man occupying the rung above the more primitive animals.

    And even after Darwin, the ladder of evolution remained one of the most widespread notions. It shows fish turning into reptiles, reptiles into mammals, primates into humans, each subsequent creature more developed (and, of course, more intelligent) than its predecessor. So the ladder of intellect correlates with the ladder of being. Both models, however, reflect thoroughly unscientific views.

    A more accurate diagram of natural evolution is a disk radiating outward. This structure was first developed by David Hillis of the University of Texas on the basis of DNA analysis. This genealogical mandala begins at the center with the most primitive forms of life and branches outward. Time moves forward, so the most recent forms of life sit along the perimeter of the circle.

    This image highlights a fundamental, invaluable fact about evolution: all living things are equally evolved. Man sits on the outer rim of the disk along with cockroaches, mollusks, ferns, foxes, and bacteria.

    All species, without exception, have gone through an unbroken chain of successful reproduction lasting three billion years, which means that bacteria and cockroaches are just as highly evolved as humans. There is no ladder.

    Likewise, there is no ladder of intelligence. Intelligence is not one-dimensional: it is a complex of various types and modes of cognition, each of them a continuum. Let's do a simple exercise and try to measure intelligence in animals. If intelligence were one-dimensional, we could easily rank, in order of increasing intelligence, a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla.

    There is currently no scientific evidence for the existence of such a sequence. One explanation might be that there simply is no difference between the intelligence levels of certain animals, but that too is unfounded.

    Zoology is rich in examples of striking differences in the thinking of animals. Maybe all animals are endowed with “general purpose” intelligence? Perhaps, but we don't have a single tool to measure this type of intelligence. However, we have many measurement systems for different types of cognition.

    Instead of a single decibel line, it would be more appropriate to depict intelligence as a diagram of a possibility space, like the visualization of possible forms generated by Richard Dawkins' algorithm. Intelligence is a combinatorial continuum. Many nodes, each itself a continuum, create a complex of colossal diversity in many dimensions. Some intelligences may be very complex, with a large set of sub-nodes of thinking. Others are simpler but more extreme, reaching a far corner of the space.
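
    To make the point concrete, here is a minimal sketch (mine, with invented dimensions and scores) of why vector-valued "intelligence profiles" admit no single ranking: under a Pareto-style comparison, two profiles can each beat the other on some dimension, so neither is simply "smarter."

```python
# Hypothetical cognitive profiles; the dimensions and scores are
# invented purely to illustrate multi-dimensional comparison.
profiles = {
    "human":    {"spatial": 7, "social": 9, "acorn_memory": 2},
    "squirrel": {"spatial": 8, "social": 2, "acorn_memory": 9},
}

def dominates(a, b):
    """True only if profile a is at least as good as b on every
    dimension and strictly better on at least one (a Pareto order)."""
    return (all(a[d] >= b[d] for d in a) and
            any(a[d] > b[d] for d in a))

h, s = profiles["human"], profiles["squirrel"]
print(dominates(h, s), dominates(s, h))  # False False: no total order
```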

    These complexes - what the different varieties of intelligence amount to for us - can be perceived as symphonies performed on different kinds of musical instruments. They differ not only in volume but in melody, color, tempo, and so on. One can also see them as ecosystems: the various component nodes of thinking are interdependent and created from one another.

    As Marvin Minsky said, human consciousness is a society of mind. Our thinking is a whole ecosystem. The brain has many modes of cognition performing different mental functions: deduction, induction, emotional intelligence, abstract and spatial thinking, short-term and long-term memory.

    The entire human nervous system is part of this cognition, with levels of knowing of its own. In fact, thinking is carried out not by the brain alone but by the whole human body.

    All varieties of thinking vary both between species and between members of the same species. A squirrel can remember the exact location of several thousand acorns for years, which is completely incomprehensible to humans. In this type of thinking, humans are inferior to squirrels. Squirrel intelligence is a combination of this superpower with other forms of intelligence in which humans are superior to squirrels. In the animal kingdom one can find many examples of the superiority of certain forms of animal intelligence over human intelligence.

    The same situation has developed with artificial intelligence, which in some areas is already superior to human intelligence. Any calculator is a mathematical genius, and the Google search engine's memory is already in some ways better than our own.

    Some AIs perform mental activities that we are not capable of. Remembering every word on six billion web pages is a daunting task for humans and a breeze for search engines. In the future, we will create completely new ways of thinking that are not accessible to humans and do not exist in nature.

    The inventors of aircraft were inspired by natural flight - the flapping of wings. However, later a fixed wing with propellers attached to it was invented, and this was a completely new principle of flight, not found in nature.

    This is how we invent new ways of thinking that no other species is capable of. Most likely, these will be principles that apply only to highly specialized problems: for example, new logical constructs that are needed only in statistics and probability theory.

    A new kind of thinking will help solve problems the human mind cannot cope with. Some of the toughest questions in business and science require a two-step solution. The first step is to invent a new mode of thinking unlike our natural one. The second is to start searching for answers together with AI.

    People will begin to consider AI smarter than themselves if it can help solve previously unsolvable problems. In fact, AI's thinking is not better than ours, it's just different. I believe that the main benefit of artificial intelligence is that it is able to think like an alien, and this alienness is its main advantage.

    Moreover, we will create complex AI "communities" with different modes of thinking. They will be so complex that they will solve problems we cannot. Some people will therefore mistakenly decide that AI complexes are more intelligent than humans. But we don't say the Google search engine is smarter than a person, even though its memory is better than ours.

    It is likely that these artificial intelligence complexes will surpass us in many areas, but none of them will surpass humans everywhere at once. Something similar happened with our physical strength. Two hundred years after the Industrial Revolution, no single machine is stronger than the average person in every respect, although machines as a class significantly surpass humans in running speed, lifting weight, cutting precision, and other particular activities.

    Despite the increasing complexity of the structure of AI, it is impossible to measure it using the methods of modern science. We don't have the tools to tell whether a cucumber or a Boeing 747 is more complex, nor do we have a way to measure the differences in their complexity. That is why we still do not have precise criteria for intellectual capabilities.

    Over time it will become ever harder to establish which is more complex, and accordingly smarter: intelligence A or intelligence B. Much of mental activity remains unexplored, and this, above all, is what keeps us from understanding that the mind is not one-dimensional.

    The second misconception about the human mind is that we believe that our mind is universal. This widespread belief influences the path we take toward the creation of artificial general intelligence (AGI), which was once heralded by AI experts.

    However, if we think of the mind as a vast space of possibilities, we cannot speak of a state of general purpose. The human mind does not occupy some abstract central place, and other specialized types of mind do not revolve around it.

    Rather, human intelligence is an extremely specific type of intelligence that has evolved over millions of years for the survival of our species on this planet. If we wanted to place our intellect among all other possible types of intelligence, it would end up somewhere in a corner - like our world itself, huddled on the edge of a huge galaxy.

    Of course, we can imagine, and sometimes invent, a type of thinking whose characteristics resemble a Swiss Army knife: it copes with many tasks, but none of them brilliantly.

    This runs into a technical rule that all things must obey, whether they were created deliberately or arose naturally: "You cannot optimize every dimension. You can only find trade-offs. A general-purpose, multifunctional machine cannot outperform specialized ones."

    A do-everything mind cannot work on a par with specialized performers of specific tasks. Because we consider our consciousness a universal mechanism, we believe that cognition need not rest on trade-offs and that it is possible to invent an artificial intelligence that shows maximum efficiency in every type of thinking.

    However, I see no evidence for this claim. We simply have not yet created a sufficient number of varieties of consciousness that would allow us to see the full picture (and for now we prefer not to consider animal consciousness through the prism of one parameter as a separate type of thinking with variable amplitude).

    Part of this belief that our thinking is maximally universal stems from the concept of universal computation. This assumption, which dates back to the 1930s, is known as the Church-Turing thesis. It states that all computations meeting certain conditions are equivalent.

    Thus there is a base common to all computation. Whether a computation is carried out by one machine with many fast components, by a slower machine, or even by a biological brain, it is the same logical process. This in turn means that we can simulate any computational process (any thinking) on any machine capable of "universal" computation.

    Using this principle, proponents of the singularity justify their expectation that we will be able to design a silicon-based artificial brain that can accommodate human consciousness, that we will be able to create artificial intelligence that will think the same way as a person, but much more efficiently. These hopes should be treated with a certain degree of skepticism, since they are based on an incorrect interpretation of the Church-Turing thesis.

    The starting point of the theory is: "Given unlimited memory and time, all computations are equivalent." The problem is that in reality no computer has infinite memory or time. When you compute in the real world, the time component is extremely important, often to the point of life and death.

    Yes, all types of thinking are equal, if you exclude the time aspect. Yes, it is possible to simulate human thought in any matrix if you choose to ignore time or the limitations of space and memory in reality.

    However, if you include the time variable in this equation, you will have to significantly change the formulation of the principle: “Two computing systems operating on radically different platforms will not perform equivalent calculations in real time.”

    This principle can be reformulated as follows: “The only method of obtaining equivalent ways of thinking is to carry them out on the same basis. The physical medium on which you perform your calculations - especially as their complexity increases - greatly influences the type of thinking in real time."
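
    A toy illustration of "equivalent, but not in real time" (my sketch, not Kelly's): the same computation run directly and then through a simulated machine gives identical answers at noticeably different speeds, because one substrate is being emulated on top of another.

```python
import time

def fib(n):
    """Iterative Fibonacci, run 'natively' in Python."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_interpreted(n):
    """The same computation dispatched through a tiny simulated
    register machine - one substrate emulated on top of another."""
    regs = {"a": 0, "b": 1}
    program = [("swap", "a", "b"), ("add", "a", "b")]  # one iteration
    for _ in range(n):
        for op, x, y in program:
            if op == "swap":
                regs[x], regs[y] = regs[y], regs[x]
            elif op == "add":  # regs[y] += regs[x]
                regs[y] = regs[y] + regs[x]
    return regs["a"]

for f in (fib, fib_interpreted):
    t0 = time.perf_counter()
    digits = len(str(f(50_000)))  # both yield the same large number
    print(f.__name__, digits, "digits in",
          round(time.perf_counter() - t0, 3), "s")
```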

    Continuing the logical chain, I will assume that the only method of creating a type of thinking that is as close as possible to human is to carry out calculations using matter that closely resembles our gray matter.

    This means that we can also assume that voluminous, complex artificial intelligence created on the basis of dry silicon will generate clumsy, complex and non-human types of thinking. If artificial intelligence could be created that runs on wet matter using human-like artificial neurons, the thought process of such AI would be much closer to ours.

    The advantages of such a "wet" system would be proportional to how close its physical medium comes to ours. Creating such a substance would require enormous material costs just to reach a level comparable to what nature gave us. And we can already create a new human that way - it only takes nine months.

    Moreover, as mentioned earlier, we think with our whole being, not just with our consciousness. Modern science has a wealth of data on how our nervous system influences, anticipates, and adapts within our "rational" decision-making. The more closely we examine the human body's systems, the more faithfully we can later recreate them. An artificial intelligence operating on a substance radically different from ours (dry silicon instead of wet carbon) will also think differently.

    I consider this peculiarity a feature rather than a bug. As I argued in the second point of this article, the difference in AI's thought process is its main advantage. And here is yet another reason why it would be wrong to say it is "smarter than the human brain."

    At the core of the concept of superhuman intelligence - and especially the theory of constant self-improvement of such AI - is a sincere belief in the limitlessness of intelligence. I have found no evidence for this claim.

    Again, the misconception of intelligence as a system defined by only one dimension contributes to the spread of this statement, but we must understand that it remains unfounded. There are no infinite physical dimensions in the universe - at least, they are not yet known to science.

    Temperature is not infinite - there are minimum and maximum values ​​of cold and heat. Space and time are also not unlimited, nor is speed. Perhaps the number axis can be called infinite, but all other physical parameters have their limits. Of course, the mind itself is also finite.

    The question arises: where are the limits of intelligence? We are accustomed to believing that the limit is somewhere far away, as “above” us as we are “above” the ants. Leaving aside the unresolved problem of one-dimensionality, how can we prove that we have not yet reached the limit? Why can't we be the crown of creation? Or maybe we have almost reached the limits of human capabilities? Why do we believe that intelligence is an endlessly evolving concept?

    It is better to perceive our intellect as one of the varieties of a huge number of types of thinking. Although every dimension of cognition and calculation has a limit, if there are hundreds of dimensions, then there are countless varieties of intelligence, but none is infinite in any dimension.

    As we create or encounter these countless variations on the theme of consciousness, we may experience them as beyond our capabilities. In my last book, The Inevitable, I outlined a list of some of these varieties that are inferior to us in one way or another. Below I will give a part of this list:

    A mind that is as close to human as possible, but has a higher reaction speed (the simplest artificial intelligence);

    A very slow mind, the main components of which are extensive storage space and memory;

    Universal intelligence, consisting of millions of individual consciousnesses acting in unison;

    A hive mind consisting of a large number of extremely productive intelligences, unaware that they are one;

    Borg supermind (the Borg are a race of cyborgs with a collective mind from the Star Trek series, trying to assimilate all living things into their collective - Newochem's note) - a multitude of highly functional intelligences clearly aware that they form a single whole;

    A mind created for the purpose of developing the personal consciousness of the wearer, but unsuitable for anyone else;

    A mind capable of imagining a more complex mind, but not being able to create it;

    A mind capable of one day successfully creating a more complex mind;

    A mind that can create a more complex mind, which in turn can create an even more complex mind, etc.;

    A mind that has quick access to its source code (can change the features of its functioning at any time);

    Superlogical mind, deprived of the ability to experience emotions;

    Standard mind, aimed at solving assigned problems, but incapable of introspection;

    A mind capable of introspection, but unable to solve the problems assigned to it;

    A mind whose development takes a long period of time, in need of a protective mind;

    An exceptionally slow mind, dispersed over a vast physical space, which appears "invisible" to forms of consciousness that respond more quickly;

    A mind capable of quickly and repeatedly reproducing copies of itself;

    A mind capable of reproducing its copies and remaining one with them;

    A mind capable of achieving immortality by moving from host to host;

    A fast, dynamic mind, capable of changing the process and nature of thinking;

    Nano-mind, the smallest independent unit (in size and energy output) capable of self-analysis;

    A mind specialized in creating scenarios and forecasting;

    A mind that never forgets anything, including incorrect information;

    Half machine, half animal;

    Half machine, half androgynous cyborg;

    A mind that uses in its work quantitative analysis that is incomprehensible to us.

    Today, some call each of these types of thinking superhuman AI, but in the future the diversity and alienness of these forms of intelligence will force us to turn to new dictionaries and study in detail the topic of thinking and intelligence.

    Moreover, adherents of the idea of superhuman AI assume that its level of mental ability will grow exponentially (although they still have no way of estimating that level). Perhaps they believe the process of exponential development is already under way.

    Either way, there is no evidence of such growth today, no matter how you measure it. Otherwise, this would mean that the AI's mental abilities double over a period of time.

    Where is the proof of this? The only thing now growing exponentially is investment in the AI industry. But the return on these investments cannot be described by Moore's law. AI does not become twice as smart in three years, or even in ten.

    I asked many experts in the field of AI for evidence of exponential growth in intelligence, and all agreed that we have no criteria for measuring intelligence. When I asked Ray Kurzweil, the exponential wizard himself, where to find evidence of the exponential development of AI, he wrote to me that AI develops not explosively but gradually.

    "Adding a new level to the hierarchy requires both an exponential increase in computing power and an increase in the complexity of the algorithm... Thus we should expect a linear increase in the number of such levels, since each requires an exponential growth of our capabilities. There are not many levels of complexity left before AI reaches the abilities of the neocortex (the main part of the cerebral cortex, responsible for higher nervous functions - Newochem's note), so I still believe my prediction of 2029 is correct."

    Ray seems to be saying that it is not the power of artificial intelligence that is growing exponentially, but the efforts to create it, while their result is simply increased by one step each time. This is almost the opposite of the intelligence explosion hypothesis. This may change in the future, but AI is clearly not growing exponentially today.
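
    A back-of-the-envelope illustration of Kurzweil's point (my numbers, purely hypothetical): if each new level of capability costs twice the computing power of the one before, then an exponentially growing budget buys levels only linearly.

```python
def levels_reached(compute_budget, base_cost=1.0):
    """If level k costs base_cost * 2**k units of compute, a budget buys
    roughly log2(budget) levels: linear progress in levels out of
    exponential growth in resources (a hypothetical toy model)."""
    spent, k = 0.0, 0
    while spent + base_cost * 2 ** k <= compute_budget:
        spent += base_cost * 2 ** k
        k += 1
    return k

for budget in (10, 100, 1_000, 10_000, 100_000):
    print(f"compute budget {budget:>7} -> {levels_reached(budget)} levels")
```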

    So when we imagine the "AI boom," we should think of it not as an avalanche but as a splitting into many new varieties. The result of technological progress will most likely be not a superhuman but an extra-human: beyond our experience, but not necessarily "above" it.

    Another widely accepted but largely unsupported myth about enslavement by superintelligence is that near-infinite intelligence can quickly solve all our problems.

    Many proponents of AI's rapid development expect it to produce an explosion of progress. I call this belief "thinkism" (the term's translation is Vyacheslav Golovanov's - Newochem's note). The approach rests on the conviction that progress is held back only by insufficient thinking or intelligence. (I will also note that belief in AI as a panacea for all ills is mostly characteristic of people who themselves like to think.)

    Consider the problems of defeating cancer or prolonging life. These cannot be solved by thinking alone. No amount of thinking will figure out how cells age or how telomeres shorten. No intelligence, however magnificent, can understand how the human body works simply by reading all the known scientific literature in the world and analyzing it.

    A super-AI will not be able simply to ponder all the current and past experiments on splitting the atomic nucleus and, a day later, produce a ready recipe for thermonuclear fusion. Going from not understanding a subject to understanding it requires more than just thinking.

    In reality there are many experiments, each yielding piles of contradictory data and requiring further experiments before a correct working hypothesis can be formed. Merely thinking about the possible results will not produce correct data.

    Thinking (intelligence) is only one instrument of science - most likely only a small one. For example, we simply lack the data to come close to solving the problem of death. When working with living organisms, most experiments take time. The slow metabolism of a cell cannot be sped up. Getting results takes years, months, or at the very least days.

    If we want to know what happens to subatomic particles, it's not enough to just think about them. We have to build very large, very complex, very sophisticated physics models to find out. Even if the smartest physicists become a thousand times smarter than they are now, they won't learn anything new without the collider.

    There is no doubt that super-AI can accelerate the development of science. We can create computer models of atoms or cells, and we can speed them up in many ways, but there are problems that prevent simulations from making immediate progress.

    It is worth remembering that simulations and models can be examined faster than their subjects simply because they discard certain variables. This is the very essence of modeling. It is also important to note that such models take a long time to be tested, studied, and validated to ensure they are relevant to their subjects. Testing by experience cannot be accelerated.

    Simplified versions in simulation are useful for finding the most promising ways to speed up progress. But in reality, nothing is superfluous, everything matters to some extent - this is one big definition of reality. As models and simulations become more and more detailed, researchers are faced with the fact that reality runs faster than its 100% simulation.

    Here's another definition of reality: the fastest functioning version of all possible details and degrees of freedom. If you could model all the molecules in a cell and all the cells in a human body, the model wouldn't run as fast as the human body. No matter how thoughtfully you design such a model, you will still need to spend time experimenting, and it doesn't matter whether it's a real system or a simulation.

    To be useful, artificial intelligence must be embedded in the world, and in the world innovation moves at its own pace. Without initial experiments, prototypes, mistakes, and engagement with reality, an intellect can think, but it will not produce results. There will be no instant discoveries a second, an hour, or a year after it is pronounced "smarter than man."

    Of course, the rate of discovery will accelerate as AI grows more complex, partly because an alien artificial intelligence will ask questions no human would think to ask, but even a very powerful (by our standards) intelligence does not guarantee immediate progress. Solving problems requires far more than mere intelligence.

    The problems of cancer and life expectancy are not the only ones that intelligence alone cannot solve. A common misconception among proponents of the technological singularity is that if we create an AI that is smarter than humans, it will suddenly evolve and create an even smarter AI.

    The new artificial intelligence will think deeper and invent something even smarter, and so on until something god-like is invented. There is no evidence that thinking alone is enough to create new levels of intelligence. This type of contemplation is based on faith.

    However, there is ample evidence that inventing new, effective intelligence requires not only mental effort but also experimentation, data, hard questions, and trial and error.

    I understand that I may be mistaken. We are still at an early stage. Perhaps we will discover a universal scale of intelligence or its infinity in all senses. There is a possibility of a technological singularity, because we know very little about what intelligence and self-awareness are. In my opinion, everything indicates that this is unlikely, but there is still a chance.

    However, I support OpenAI's broader goals: we should develop friendly AI and figure out how to endow it with values that are self-replicating and aligned with our own.

    There is a possibility that superhuman AI could be harmful in the long term, but this idea is based on incomplete evidence and should not be taken seriously when it comes to science, politics or progress.

    An asteroid hitting Earth could destroy us, a possibility that exists (as confirmed by the B612 Foundation), but we should not consider such an outcome in matters of global warming, space travel, or city planning.

    The available evidence suggests that AI will most likely not be superhuman. It will possess new varieties of thinking inaccessible to man, but without comprehensive application in the world it will not become a god solving our main problems in an instant.

    Instead it will become an aggregation of limited intelligences, working better than we do in areas unfamiliar to us and, together with us, finding solutions to problems both existing and new.

    I understand how attractive the idea of a superhuman, godlike AI is. It could become the new Superman. But like Superman, it is a fictional character. Superman may exist somewhere in the universe, but it is highly unlikely. Be that as it may, myths can be useful, and once created they do not disappear.

    The idea of ​​Superman will live forever. The idea of ​​superhuman AI and the singularity is emerging now and will never be forgotten. We need to understand what kind of idea it is: religious or scientific. If we explore the issue of intelligence, artificial or natural, we must clearly understand that our ideas about superhuman AI are just a myth.

    Tribes on the isolated islands of Micronesia first made contact with the outside world during World War II. The gods flew from distant lands, descended from heaven on noisy birds, brought gifts and flew away forever. Cults of the return of these gods and new gifts spread across the islands. Even now, 50 years later, many are still waiting for their return.

    Superhuman AI could become our new cargo cult. Perhaps a hundred years from now, people will perceive our time in the same way: as if we believed in superhuman AI and waited for decades for it to appear any minute and bring us unimaginable gifts.

    Non-superhuman AI, however, already exists. We keep searching for new definitions of it, and keep complicating it. But in the broad sense this mind alien to us is a whole spectrum of intellectual capabilities, modes of thinking, mechanisms of reasoning, learning, and self-awareness. AI is spreading and will continue to spread. It is becoming deeper, more diverse, more powerful.

    Before AI, no invention could completely change the world. By the end of the 21st century, artificial intelligence will be so powerful that it will change everything in our lives.

    Be that as it may, the myth of superhuman AI that will give us super-wealth or super-slavery (or both) will live on. However, it will remain a myth, which is unlikely to be translated into reality.”

    A global apocalypse involving AI (artificial intelligence) increasingly frightens the world's leading researchers in computer technology. It could be an uprising of robots which, guided by their programs and values, decide to destroy the biological life forms on the planet. A scenario in which machines controlled by AI become the dominant form of life is possible.

    Why are world-famous scientists such as Bill Gates, Stephen Hawking, Elon Musk, and others concerned about this problem and publicly declaring this scenario a real threat? Could the events of the Hollywood film "Terminator" prove prophetic? It is said that any fictional story is based on real facts.

    Computers are getting better at lying and deception

    Lying is a form of behavior developed in the course of evolution. People, and some animals and birds, resort to deception to obtain food and advantages, which increases their chances of survival. Recently these evolutionary methods have been successfully mastered by AI machines.

    Designers from the Georgia Institute of Technology have made progress in creating robots that use lies and deception in their behavior algorithms. Scientists believe that in the future, this technology will find wide application in the military sphere.

    It is assumed that prototype robots will see use in real combat conditions. These animate mechanisms will be able to guard provisions and ammunition depots. Using deception programs, the guards of the future will be able to change their patrol routes and take unpredictable actions. As the developers intend, such methods will help deceive potential adversaries, whether humans or other robots.

    According to Professor Arkin, the research involves a number of ethical problems: a loss of control over the technology, should it fall into the hands of ill-wishers, could lead to a serious disaster.

    Machines are taking over the job market

    While the public is busy thinking about a global war involving AI, scientists are describing more pressing and real dangers. Various automatic machines and mechanisms are increasingly used in production, destroying jobs.

    Many analysts and experts agree that active development in the field of logic programs and automation could cost many people their jobs. In the United States, more than 250,000 robots already perform work traditionally done by humans.

    In the countries of the East, which produce more than 90% of the world's technology, manual labor still predominates. One can only guess what will happen to people employed in production with the arrival of robots.

    It is not just workers who worry that machines will displace humans from the labor market. Andrew Ng, an artificial intelligence expert from the Google Brain project and a leading expert at Baidu (the Chinese equivalent of Google), has also expressed concern about the pace of AI development. According to him, artificial electronic intelligence can do the same things as an ordinary person, only faster, better, and with fewer resources.

    The University of Oxford in the UK has conducted research on the dynamics of AI's impact on production. According to the data obtained, over the next twenty years 35% of jobs will be taken over by AI.

    Computer programs are superior to hackers

    To the ordinary person, a hacker seems a superman capable of achieving any result. In reality, hackers are programmers with tired faces and the traditional cup of coffee.

    Hacking is rather boring, tedious work that consumes a great deal of time and resources; moreover, entire departments of specialists develop programs to protect against intrusion. The biggest threat to malicious hackers comes from advanced AI systems.

    Artificial intelligence can be used not only for defense but also for attack. Within cybersecurity programs, AI is being developed that can find vulnerabilities in an adversary's systems while at the same time protecting its own logical devices from external threats and attacks.

    According to scientists, AI-based electronic hackers are being developed for the sake of achieving the common good and prosperity. But if attackers gain access to such programs, sad events can occur. It’s scary to imagine a situation where a powerful AI hacks the security protocols of a nuclear power plant and gains complete control. In this situation, humanity may find itself helpless.

    Programs learn to understand people's motives

    The daily life of many people is inextricably linked with social networks, and one of the most popular in the world is Facebook. But few know that not a single visit a person makes to the pages of this social network goes unnoticed by AI. During a visit to Berlin, Mark Zuckerberg explained what purposes the AI serves.

    The program is designed to analyze user behavior on Facebook pages. Based on the collected data, it makes recommendations about the interests and preferences of the audience. At its current stage of development, the autonomous system is capable of recognizing patterns and teaching itself in accordance with its program.
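
    As a heavily simplified sketch of the general idea (mine, not Facebook's actual system): count a user's interactions by topic and recommend the highest-scoring topics.

```python
from collections import Counter

# Hypothetical interaction log: (user, topic) pairs standing in for
# likes, clicks, and views; real systems use far richer signals.
interactions = [
    ("alice", "robotics"), ("alice", "ai"), ("alice", "cooking"),
    ("alice", "ai"), ("alice", "ai"), ("alice", "robotics"),
]

def recommend(log, user, k=2):
    """Rank topics by how often the user interacted with them and
    return the top k as crude "interest" recommendations."""
    counts = Counter(topic for u, topic in log if u == user)
    return [topic for topic, _ in counts.most_common(k)]

print(recommend(interactions, "alice"))  # ['ai', 'robotics']
```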

    In the future it is planned to create intelligent AI that can independently improve its skills by rewriting its own code. This may lead to a better life - or become humanity's final invention.

    Cyborgs will replace lovers

    Many Hollywood films are based on the idea of ​​love and interspecies relationships between man and machine. From the realm of science fiction, this concept flows into the modern world.

    Futurist Dr. Ian Pearson believes that by 2050, sex between a person and a robot will occur more often than just between people. The scientist actively collaborates with Bondara, the leading chain of sex stores in Britain.

    The doctor's report mentions that by 2025, sex robots with artificial intelligence will be developed. By 2035, sex through virtual reality will become widespread, at the level of modern pornography. And by 2050, sex with robots will become an everyday reality and become the norm.

    Defenders of traditional values view sex with intelligent machines negatively. Such liaisons would set the bar of expectations too high, which would adversely affect attitudes toward women.

    Machines are becoming more and more human-like

    Developers of humanoid robots, Hiroshi Ishiguro of Japan and Sun Yang of China, have created a unique machine: the Yang-Yan cyborg, which has the appearance of its creator and is named after her daughter. The machine has AI and is capable of performing a number of simple actions.

    This is not the only humanoid example of robotics. Nanyang Technological University in Singapore has also presented a modern example of a cyborg, Nadine. This prototype likewise has artificial intelligence and works at the educational institution.

    Designed as an attractive brunette with soft skin, the cyborg has a number of built-in functions: it can greet an interlocutor with a handshake, make eye contact, and smile. Its AI can remember guests and maintain a simple dialogue based on previous experience.

    Robots gain feelings

    Modern robots are not inferior to humans in intelligence, and thanks to recent developments it is quite difficult to distinguish a cyborg from a human in appearance. So far the one trait robots truly lack is the ability to feel emotion. Scientists around the world are working on the problem, and soon robots may be completely indistinguishable from people.

    Developers from the East Asian branch of the Microsoft Application & Services Group have released a unique program: a computer able to conduct a dialogue with an interlocutor while appearing to experience emotions. The artificial intelligence, named Xiaoice, imitates the conversational style of a seventeen-year-old girl.

    When it lacks data, the system may resort to lies. If caught cheating, Xiaoice may express anger or embarrassment. The AI can resort to sarcasm, act suspicious, and show impatience.

    The unpredictability of the program's responses is reminiscent of talking with a real person. So far the technology has not found wide application and is most often used for entertainment. Xiaoice's creators are working on improving its logic algorithms, which could lead to a fundamental breakthrough.

    Machines will soon be in our heads

    In light of recent developments, new opportunities are opening up. Learning a foreign language in a few minutes by downloading it into the brain no longer seems fantastic. Ray Kurzweil, a director of engineering at Google, believes that by 2030 the technology for implanting nanorobots into the brain will be developed, raising human intelligence to a fundamentally new level.

    These micro-mechanisms will turn consciousness into a powerful computing center. A person will be able to remember any information instantly, to sort his own thoughts and send them to an archive, and, when needed, to download movies and documents or receive email directly in his head.

    An expert in the field of AI, Kurzweil believes that symbiosis with nanorobots will not only help increase intelligence, but will also be able to cure a number of diseases associated with the activity of the nervous system.

    The threats of such technology read like a global conspiracy theory. Having gained control of the mind of a country's leading politician, an AI could run him like a weak-willed zombie. Such a system could also be used by intelligence services, which bodes nothing good.

    How robots become weapons

    With the fall of the Soviet Union, the Cold War did not end. Two new superpowers, China and Russia, have emerged on the world stage. America, in turn, spends tens of billions of dollars on developments in the field of computer security.

    The Pentagon believes that the key to future technological superiority lies in the use of AI. Self-learning autonomous systems, along with parallel and related fields, are being actively developed. Prototypes of remotely controlled military equipment already exist; it remains only to teach it to operate on its own.

    Using such devices in war promises great advantages: soldiers would not have to expose themselves to enemy fire and risk their lives, and the morale of an enemy fighting non-living robots would be undermined as well.

    There is also a downside. Any computer equipment is subject to system failures and hacking, in which case robots slipping out of control would be capable of turning their weapons against their creators and against civilians.

    Many leading AI experts do not support the idea of ​​​​creating autonomous combat robots. But, in light of the growing military confrontation in the world and the absence of an international ban on the use of such technologies, many countries are conducting their development in strict secrecy. Soon, more and more reports will appear about the use of robots in hot spots of the planet.

    Robots are learning in the wrong direction

    Anticipating possible failures in programs, and in order to prevent an uprising of the machines, researchers are developing new concepts aimed at explaining to a machine what behavior is correct and what is undesirable.

    These developments could potentially make AI more humane. Some scientists see this approach as the key to preventing AI from turning weapons against humanity.

    Unique experiments are under way at the Georgia Institute of Technology. Researchers led by Mark Riedl and Brent Harrison are literally telling the computer stories. In this way they seek to introduce AI to human values and develop its ethics. The scientists teach the program with methods used in raising a child, for AI, like an inexperienced child, does not distinguish good from bad.
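
    As a toy sketch of the general idea (mine, not the Georgia Tech system): actions that appear frequently in example "stories" of approved behavior get high scores, so an agent guided by those scores prefers the actions people typically take.

```python
from collections import Counter

# Invented example stories describing socially approved ways
# to obtain medicine from a pharmacy.
stories = [
    ["enter_pharmacy", "wait_in_line", "pay", "take_medicine", "leave"],
    ["enter_pharmacy", "pay", "take_medicine", "leave"],
]

def action_scores(stories):
    """Frequency of each action across approved stories - a crude
    stand-in for a reward signal learned from narratives."""
    return Counter(action for story in stories for action in story)

scores = action_scores(stories)
# "steal_medicine" never appears in approved stories, so it scores 0.
for action in ("pay", "steal_medicine"):
    print(action, scores.get(action, 0))
```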

    A serious flaw remains in this approach. Human history contains many examples of circumstances forcing entire highly developed countries to start wars and commit terrible atrocities. Nazi Germany is one example: its soldiers carried out the genocide of entire peoples, yet German culture at the time was among the most developed in the world. What will prevent AI from repeating Hitler's experience?

    The development of artificial intelligence will one day lead to its surpassing human mental abilities. But won't this become dangerous for humanity? The situation can be studied by defining the concept of AI more precisely, taking natural intelligence as the basis for comparison. Can one person combine mind and intellect at the same time? Or can a clever person fail to be an intellectual, and vice versa?

    Such questions arise as the era of AI approaches, and humanity must know its possible dangers in advance and take timely measures to ensure its own safety. First of all, the danger of AI will be associated with its independence and uncontrolled decision-making. Funds have already been allocated to study the problem; the OpenAI institute, for one, studies the prospects of AI development. At the current stage of AI systems' development, the danger of their use may be associated with the following factors:

    • Software errors. Any software is exposed to this risk.
    • Independent activity of AI that is harmful to humans. The danger from AI may arise once a genuinely smart computer is invented. It is therefore necessary to determine what degree of intelligence is acceptable for a computer and what degree is excessive and poses a danger to humans. These properties must be measured precisely, because the convergence of human and computer mental abilities is inevitable. The information systems that exist today are human-machine systems that work thanks to the intelligence of a user or a computer expert.

    For example, what danger would an intelligent accounting system pose by producing incorrect information? Danger arises when such a system develops elements of personality, for example a self-interest that has nothing in common with human interests. One solution may be a ban on creating systems capable of evolving.

    In addition, danger may be associated with logical errors contained within the AI itself. AI can be applied to quite complex problems whose full scope is not known in advance, so special measures are needed to confirm the correctness of the solutions obtained. Most likely it will be necessary to develop ways of controlling such systems, for example special software that automatically checks the correctness of a solution without requiring human participation.
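
    As a toy sketch of such a checker (my illustration, assuming the task is equation solving): an untrusted solver proposes an answer, and an independent verifier confirms it by substitution rather than by repeating the solver's reasoning.

```python
def untrusted_solver(a, b, c):
    """Stand-in for an opaque AI component: proposes a root of
    a*x**2 + b*x + c = 0 by some method we do not inspect."""
    d = (b * b - 4 * a * c) ** 0.5
    return (-b + d) / (2 * a)

def verify(a, b, c, x, tol=1e-9):
    """Independent automatic check: substitute the proposed solution
    back into the equation. Checking is much simpler than solving,
    which is what makes unattended control of the solver feasible."""
    return abs(a * x * x + b * x + c) < tol

x = untrusted_solver(1, -3, 2)
print(x, verify(1, -3, 2, x))  # accept the answer only if the check passes
```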
