Elements applications of artificial intelligence in transport and logistics


The emergence of the science of artificial intelligence

Artificial intelligence (AI) is intelligence displayed by machines, as opposed to the natural intelligence displayed by humans and animals. The study of artificial intelligence began in the 1950s, when systems could not yet perform tasks as well as humans. The overarching goal of the field is to build systems that exhibit intelligence and consciousness and are capable of self-learning. The best-known branches of artificial intelligence are machine learning and its subfield, deep learning.

The development of artificial intelligence is a controversial area, as scientists and policymakers grapple with the ethical and legal implications of creating systems that exhibit human-level intelligence. Some argue that the best way to promote artificial intelligence is through education, to prevent bias against people and to make the technology accessible to people from all socioeconomic backgrounds. Others fear that increased regulation and concerns over national security will hamper the development of artificial intelligence.

Artificial intelligence (AI) originated in the 1950s, when scientists began to ask whether machines could exhibit intelligent behavior that until then only the human brain could produce. In the early 1960s, teams at Carnegie Mellon University began work on general problem-solving programs, and in 1963 MIT launched Project MAC. In the mid-1960s, Joseph Weizenbaum at MIT created a program called Eliza, which became the first machine to appear to converse, reason and make decisions like a human.

In the early 1960s, J.C.R. Licklider, head of ARPA's Information Processing Techniques Office, began funding research in computer science and the cognitive sciences with the goal of developing intelligent machines. The term «artificial intelligence» itself had been coined earlier by John McCarthy, for the 1956 Dartmouth workshop, to describe the entire spectrum of cognitive technologies under study.

Marvin Minsky explored the concept of artificial intelligence in his book «Society of Mind» and foresaw that the field would develop through three stages: personal, interactive and practical. Personal AI, which he considered the most promising, would lead to the emergence of human-level intelligence, an intelligent entity capable of pursuing its own goals and motives. Interactive AI would develop the ability to interact with the outside world. Practical AI, which he believed was most likely, would develop the ability to perform practical tasks.

The term artificial intelligence began to appear widely in the late 1960s, when scientists began to make strides in the area. Some scientists believed that in the future computers would take on tasks that were too complex for the human brain, thus achieving intelligence. In 1965, researchers were fascinated by an artificial intelligence problem known as the Stanford problem, in which a computer was asked to find the shortest path on a map between two cities in a given time. Despite many attempts, the computer was able to complete the task only 63% of the time. In 1966, Stanford professor John McCarthy stated that this problem «is as close as we can come in computers to the problem of brain analysis, at least on a theoretical basis».

In 1966, researchers at IBM, Dartmouth College, the University of Wisconsin-Madison, and Carnegie Mellon completed work on the Whirlwind I, the world's first computer designed specifically for artificial intelligence research. Computers were later used in the Human Genome Project to predict a person's genetic makeup. In 1968, researchers at the Moore School of Electrical Engineering published an algorithm for artificial neural networks that could potentially be much more powerful than an electronic brain.

In the late 1960s, Seymour Papert, together with colleagues at Bolt, Beranek and Newman, created Logo, a programming language for children. Logo was one of the first programs to use both numbers and symbols, as well as a simple grammar. In 1969, Papert helped found the Center for Interactive Learning, which led to the further development of Logo and to further research into artificial intelligence.

In the 1970s, a number of scientists began experimenting with self-aware systems. In 1972, Yale professor George Zbib introduced the concept of «artificial social intelligence», coined the term «emotional intelligence», and suggested that such systems might one day understand human emotions. In 1973, Zbib co-authored an article entitled «Natural Aspects of Human Emotional Interaction», in which he argued that artificial intelligence could be combined with emotion recognition technology to create systems capable of understanding emotions. In 1974, Zbib founded Interaction Sciences Corporation to develop and commercialize his research.

By the late 1960s, several groups were working on artificial intelligence. Some of the most successful researchers in the area were from the MIT Artificial Intelligence Laboratory, founded by Marvin Minsky and John McCarthy. MIT's success can be attributed to the diversity of its individual researchers, their dedication, and the group's success in finding new solutions to important problems. Even so, by the late 1960s most artificial intelligence systems were not nearly as capable as humans.

Minsky and Simon envisioned a universe in which the intelligence of a machine is represented by a program, or set of instructions. As the program ran, it led to a series of logical consequences called a «set of affirmative actions». These consequences could be looked up in an answer dictionary, which would create a new set of explanations for the child. In this way, the child could make educated guesses about the state of affairs, creating a feedback loop that, in the right situation, could lead to a fair and useful conclusion. However, the system had two problems: the child had to be taught according to the program, and the program had to be perfectly detailed. No programmer could remember all the rules a child had to follow, or every answer a child might give.

To solve this problem, Minsky and Simon developed what they called the «magician's apprentice» (later known as the Minsky rule-based reasoning system). Instead of memorizing each rule, the system followed a process: the programmer wrote down a statement and identified the «reasons» for the various outcomes based on the words «explain», «confirm», and «deny». If an explanation matched one of the «reasons», the program tested it and gave feedback; if it did not, a new rule had to be developed. If the program succeeded in this second phase, it was allowed to create more and more rules, increasing the breadth of its theories. When faced with a problem, it could be asked to read through the entire set of rules in order to re-examine the problem.
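
The passage above describes this flow only in prose, so the following is a minimal, purely illustrative Python sketch of that kind of rule-matching loop; every rule, statement, and explanation in it is invented for the example and is not taken from Minsky and Simon's actual system.

```python
# A minimal sketch (not from the book) of a rule-based loop: explanations are
# checked against «reason» rules keyed by "explain", "confirm" and "deny";
# an unmatched explanation triggers the creation of a new rule.
rules = {
    "explain": ["because the battery is empty"],
    "confirm": ["the light is off"],
    "deny":    ["the switch is broken"],
}

def process(statement: str, explanation: str) -> str:
    # If the explanation matches a known «reason», test it and give feedback.
    for reason, known in rules.items():
        if explanation in known:
            return f"feedback: '{statement}' handled via rule type '{reason}'"
    # Otherwise develop a new rule, broadening the system's theories.
    rules["explain"].append(explanation)
    return f"new rule added for explanation: '{explanation}'"

print(process("the lamp does not work", "because the battery is empty"))
print(process("the lamp does not work", "because the bulb burned out"))
```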

The Minsky and Simon system was powerful because the programmer only had to supply a few versions of an explanation. The researcher was not required to go through any procedure other than writing and entering the program's requirements. This allowed Minsky and Simon to create more rules and, more importantly, to learn from their mistakes. In 1979, the system was successfully demonstrated on the SAT exam. Although the system had two flaws that prevented it from answering two of the three SAT questions, it scored 82 percent on Group 2 and 3 questions and 75 percent on Group 4 and 5 questions. The system did not cope with complex questions that did not fit its established rules. Processing large amounts of data was also slow, so additional detail was thrown away to speed the system up.

The system also had some limitations arising from the rules themselves. Rules could only be defined in terms of a limited number of labels. For example, when rules were given, they had to define what the labels meant, and they could only be applied to positive results. As the system's ability to process information grew, it was shown that the system could make mistakes. In particular, if it had to apply the same label to two different objects (and still detect an error), it could not make a useful distinction between the two objects and then decide which label should be applied.

Minsky and Simon focused on applying their system to humans. They developed a system they called a «living program», or «projective computing system» (PPAS). They used PPAS to create a symbolic approach to the study of psychology. This had the advantage that, unlike traditional programs, the teaching itself could be programmed. The program had to use symbols to describe the human system and then train the system through explanations. They later called this approach «general computing», which could be used to study any problem given enough time and data.

For Minsky and Simon, the main limitation of their system was its ability to accurately calculate results. This limitation was not related to any flaw in the system itself; the system worked, but it was slow and expensive. For this reason, they thought they could get around it by programming the results using so-called «functional programming» (FP), a style whose roots go back to work in the 1950s. Functional programming refers to a programming style that focuses on the functions and behavior of programs rather than on the details of their implementation. Using FP, the system could compute the results and then explain the cause of a problem in human language.

Over the next decade, PPS and PPAS continued to grow, and in 1966 Minsky and Simon published an article titled «Brain Activity Systems», which summarized their research. In it they showed that a program could be written that would read the brain activity of a number of volunteers and then track it. Each volunteer read a passage about how the brain works; while they completed this task, their brain activity was measured.

Specifically, the authors showed that their system was capable of responding to certain brain waves (also called rhythms) and that it could combine these brain waves in ways that helped make sense of a subject. They showed that if the brainwave was a slow rhythm, the system was able to «remember» information it had been exposed to earlier and «reactivate» it when needed. If the brainwave was a fast rhythm, the system could «cure» the forgetting of information by comparing it with another element.

When Minsky and Simon published their article, it attracted a great deal of attention, because it described the kind of experimental system that could, in theory, actually be implemented. They were able to approach the study from a practical point of view.

In 1972, Minsky and Simon founded the Center for Behavioral Neuroscience in Ann Arbor, Michigan. They designed and conducted a series of experiments that led them to the following conclusions: «There was something different from the mind, something that distinguished it from any other organization»; «The data showed that our ideas about action and the brain were different»; «Our brain worked differently than other parts of the body»; «There is a possibility that the organization of the mind may be influenced by the activity of the brain»; «The minds are based on basic physical principles.»

They came to this conclusion because they saw a relationship between specific brain activity and a specific behavior or idea. In other words, if you observed activity that looked as though it came from the mind, and you also saw behavior that looked as though it came from the mind, then the behavior was likely to follow from that activity. And if the mind was «imprinted» on the behavior, then the behavior had to follow the mental action, and not vice versa. They began to formulate a new theory about how behavior arises and how the mind is formed.

Minsky explained:

«The starting point was the work we did on the correlations between brain activity and human behavior. It was very clear to us that these correlations cannot be understood without first understanding how behavior is generated.»

The authors came to the conclusion that any inorganic system can act only on the basis of its internal states. If the internal states changed, then the behavior of the system would change. When the authors thought of a brain that responds to certain types of brain waves, they noticed that the brain would produce a certain behavior, and that this behavior would correspond to the internal state of the brain. This is a universal principle of nature. Since this principle of nature made behavior universal, it should lead the authors to the conclusion that if they applied these principles to the brain, they could create a computer program that would be able to reproduce the behavior of the brain.

Minsky believed that universal principles governing biological systems could be used to create computer software. However, Minsky admitted that his ideas were «science fiction.» It took Minsky and Simon another year to find a way to create a computer that could mimic their discoveries. But by 1972, they had developed a computer program that could test their theories.

John B. Barg, professor of psychology at Yale University, was also instrumental in the development of Minsky and Simon’s research. Barg helped found the Center for Behavioral Neuroscience at the University of Michigan in 1972, where Minsky and Simon continued to experiment with human and animal behavior.

The field of artificial intelligence research began at a seminar at Dartmouth College in 1956, where the term «artificial intelligence» was first coined. The following year, in 1957, the Massachusetts Institute of Technology, together with its research graduate students, formed a new organization of AI researchers called the SIGINT-A (Intelligence and Scientific Computing) Committee. After laying many of the foundations of artificial intelligence, members of this group carried out research on a similar program at Stanford University. The group decided to keep the name SIGINT-A and to develop a new research and development program in artificial intelligence, and it eventually became the world-famous artificial intelligence laboratory that still bears the name. SIGINT-A is a legendary research organization: many famous names in AI passed through it, and many projects were carried out in its laboratory. To meet engineering needs or to fulfill a new mission in a new era of artificial intelligence, SIGINT-A was never afraid to try new things, and many of its ideas and directions have been accepted into the mainstream of the field. Many of the tools we now regard as leading AI techniques, such as neural networks and support vector machines, were created or adapted in the SIGINT-A era.

Computer science defines AI research as the study of «intelligent agents»: any device that perceives the environment and takes action based on what it perceives.

It is a common misconception that artificial intelligence research focuses on creating technologies that resemble human intelligence. However, as Alan Turing wrote, the most important attributes of human intelligence are not the pursuit of mathematical knowledge and the ability to reason, but the ability to learn from experience, to perceive the environment, and so on. To understand how these properties of human intelligence can be used to improve other technologies, one must first understand the properties themselves.

AI researchers and entrepreneurs use the term «artificial intelligence» to describe software and algorithms that demonstrate human-like intelligence. The academic field has since expanded to cover related topics such as natural language processing and intelligent systems. Much of the work in this area takes place in universities, research institutes and companies, with investment from companies like Microsoft and Google.

Artificial intelligence is also used in other industries, such as the automatic control of ships, and is commonly used in the development of robotics. Examples of AI applications include speech recognition, image recognition, language processing, computer vision, decision making, robotics, and commercial products such as language translation and recommendation engines. Artificial intelligence is also a focus of national and international public policy and of funding bodies such as the National Science Foundation. Research and development in artificial intelligence is managed by independent organizations that receive grants from public and private agencies. Other organizations, such as The Institute for the Future, maintain a wealth of information on AI, other emerging technologies, and emerging professions, as well as on the talent required to work with those technologies.

The definition of artificial intelligence has evolved since the concept was first developed; it is currently not a black-and-white definition but rather a continuum. From the 1950s to the 1970s, AI research focused on the automation of mechanical functions. Researchers such as John McCarthy and Marvin Minsky explored the problems of general computing, general artificial intelligence, reasoning, and memory.

In 1973, Christopher Chabris and Daniel Simons proposed a thought experiment called The Incompatibility of AI and Human Intelligence. The problem described was that if the artificial system was so smart that it was superior to humans or superior to human capabilities, the system could make whatever decisions it wanted. This can violate the fundamental human assumption that people should have the right to make their own choices.

In the late 1970s and early 1980s, the field shifted from its classical orientation toward computers to the creation of artificial neural networks. Researchers began to look for ways to teach computers to learn rather than just perform predefined tasks. The field developed rapidly, became more scientifically oriented, and its range of application expanded from pure computation to human perception and action.

Many researchers in the 1970s and 1980s focused on defining the boundaries of human and computer intelligence, or the capabilities required for artificial intelligence. The boundary should be wide enough to cover the full range of human capabilities.

While the human brain is capable of processing gigabytes of data, it was difficult for leading researchers to imagine how an artificial brain could process much larger amounts of data. At the time, the computer was a primitive device and could only process single-digit percentages of data on a human scale.

During that era, artificial intelligence scientists also began work on algorithms to teach computers to learn from their own experience — a concept similar to how the human brain learns. Meanwhile, in parallel, a large number of computer scientists developed search methods that could solve complex problems by looking for a huge number of possible solutions.

Artificial intelligence research today continues to focus on automating specific tasks. This emphasis on the automation of cognitive tasks is called «narrow AI». Many researchers working in this field are working on facial recognition, language translation, playing chess, composing music, driving cars, playing computer games, and analyzing medical images. Over the next decade, narrow AI is expected to develop more specialized and advanced applications, including a computer system that can detect early stages of Alzheimer’s disease and analyze cancers.

The public uses and interacts with artificial intelligence every day, but the value of AI in education and business is often overlooked. AI has significant potential in almost all industries, such as pharmaceuticals, manufacturing, medicine, architecture, law and finance.

Companies are already using artificial intelligence to improve services, improve product quality, lower costs, improve customer service, and save money on data centers. For example, with robotics software, Southwest Airlines and Amadeus can better answer customer questions and use customer-generated reports to improve their productivity. Overall, AI will affect nearly every industry in the coming decades. On average, about 90% of U.S. jobs will be affected by AI by 2030, but the exact percentage varies by industry.

Artificial intelligence can dramatically improve many aspects of our lives. There is great potential for improving health, treating illness and injury, restoring the environment, increasing personal safety, and more. This potential has generated a great deal of discussion and debate about its impact on humanity. AI has been shown to be far superior to humans in a variety of tasks, from computer vision, speech recognition, machine learning, language translation, natural language processing, pattern recognition, and cryptography to chess.

Many of the fundamental technologies developed in the 1960s, such as neural networks and their associated data structures, were largely abandoned by the late 1990s, leaving gaps in the field, even though they are the technologies that define AI today. Many modern artificial intelligence technologies are based on these ideas and are much more powerful than their predecessors. Because of the slow pace of change in the tech industry, current advances have produced some interesting and impressive results, but there is little to distinguish them from each other.

Early research in artificial intelligence focused on learning machines that used a knowledge base to change their behavior. In 1970, Marvin Minsky published a concept paper on LISP machines. In 1973, Robin Milner proposed a similar language called ML, which, unlike LISP, recognized a restricted subset of finite and formal sets for inclusion.

In the decades that followed, researchers were able to refine the concepts of natural language processing and knowledge representation. This advance has led to the development of the ubiquitous natural language processing and machine translation technologies in use today.

In 1978, Andrew Ng and Andrew Hsey wrote an influential review article in the journal Nature covering over 2,000 papers on AI and robotic systems. The review addressed many aspects of the area, such as modeling, reinforcement learning, decision trees, and social media.

Since then, it has become increasingly difficult to attract researchers to natural language processing, and new advances in robotics and digital sensing have outpaced the state of the art in that field.

In the early 2000s, a lot of attention was paid to the introduction of machine learning. Learning algorithms are mathematical systems that learn by observation.

In the 1960s, Bendixon and Ruelle began to apply the concepts of learning machines to education and beyond. Their innovations inspired researchers to further explore this area, and many research papers were published in this area in the 1990s.

Sumit Chintal’s 2002 article, Learning with Fake Data, discusses a feedback system in which artificial intelligence learns by experimenting with the data it receives as input.

In 2006, Judofsky, Stein, and Tucker published an article on deep learning that proposed a scalable deep neural network architecture.

In 2007, Rohit described «hyperparameters». The term «hyperparameter» refers to a setting of a machine learning algorithm that is chosen before training rather than learned from the data. While it is possible to design systems with tens, hundreds, or thousands of hyperparameters, their number must be carefully controlled, because overloading a system with too many hyperparameters can degrade performance.
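
As a concrete illustration of why the number of hyperparameters must be kept under control, here is a minimal sketch of a hyperparameter grid search in plain Python; the two hyperparameters, their candidate values, and the simulated validation loss are invented for the example.

```python
# A minimal sketch (not from the book) of a hyperparameter grid search.
# The "model" is a stand-in: validation loss is simulated by a toy function
# of two hypothetical hyperparameters, learning_rate and depth.
from itertools import product

def validation_loss(learning_rate: float, depth: int) -> float:
    # Toy stand-in for training a model and measuring its validation loss.
    return (learning_rate - 0.1) ** 2 + 0.05 * abs(depth - 4)

grid = {
    "learning_rate": [0.01, 0.1, 0.3],
    "depth": [2, 4, 8],
}

best = None
for lr, d in product(grid["learning_rate"], grid["depth"]):
    loss = validation_loss(lr, d)
    if best is None or loss < best[0]:
        best = (loss, {"learning_rate": lr, "depth": d})

print("best hyperparameters:", best[1], "loss:", round(best[0], 4))
```

Even with just two hyperparameters and three candidate values each, the grid already contains nine configurations; every additional hyperparameter multiplies that count, which is one reason overly large hyperparameter spaces quickly become unmanageable.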

Google co-founders Larry Page and Sergey Brin published an article on the future of robotics in 2006. This document includes a section on developing intelligent systems using deep neural networks. Page also noted that this area would not be practical without a wide range of underlying technologies.

In 2008, Max Jaderberg and Shai Halevi published «Deep Speech», presenting a technology that allowed a system to identify the phonemes of spoken language. The system took four sentences as input and was able to output sentences that were almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak, and it had a great impact on research in the field of natural language processing.

In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The work was widely cited because it introduced the field of human-centered AI research.

Around the same time, Clifford Nass and Herbert A. Simon emphasized the importance of human-centered design in building artificial intelligence systems and laid out a number of design principles.

In 2014, Hinton and Thomas Kluver described neural networks and used them to build a system that could transcribe the speech of a person with a cleft lip. The transcription system showed significant improvements in speech recognition accuracy.

In 2015, Neil Jacobstein and Arun Ross described the TensorFlow framework, which is now one of the most popular data-driven machine learning frameworks.

In 2017, Fei-Fei Li highlighted the importance of deep learning in data science and described some of the research that had been done in this area.

Artificial neural networks and genetic algorithms

Artificial neural networks (ANNs), commonly referred to simply as deep learning algorithms, represent a paradigm shift in artificial intelligence. They have the ability to explore concepts and relationships without any predefined parameters. ANNs are also capable of studying unstructured information that goes beyond the requirements of established rules. Initial ANN models were built in the 1960s, but research has intensified in the last decade.
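
To make the idea of an ANN learning relationships without predefined parameters more concrete, here is a minimal sketch of a tiny feedforward network trained by gradient descent; it uses only NumPy, and the architecture, learning rate, and XOR task are illustrative choices rather than anything prescribed in the text.

```python
# A minimal sketch (not from the book) of a tiny feedforward neural network
# trained with gradient descent; NumPy only, learning the XOR function.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units, sigmoid activations.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent update.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # as training succeeds, values move toward [[0], [1], [1], [0]]
```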

The rise in computing power opened up a new world of computing through the development of convolutional neural networks (CNNs) in the early 1970s. In the early 1980s, Stanislav Ulam developed the symbolic distance function, which became the basis for future network learning algorithms.

In the early 2000s, floating-point GPUs provided dramatic gains in performance and power efficiency for data processing, and by the early 2010s several CNNs had been deployed on ImageNet. The emergence of deep learning algorithms is a consequence of the application of more general computational architectures and of new methods for training neural networks.

With the latest advances in multi-core and GPU technology, training neural networks with multiple GPUs is possible at a fraction of the cost of conventional training. One of the most popular examples is GPU deep learning. Training deep neural networks on GPUs is fast, scalable, and requires low-level programming capabilities to implement modern deep learning architectures.
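
As an example of what training on a GPU looks like in practice, here is a minimal sketch using the PyTorch framework (an assumption; the text does not name a specific library): the same training loop runs on a CUDA device when one is available and falls back to the CPU otherwise.

```python
# A minimal sketch (not from the book) of moving neural network training to a
# GPU with PyTorch; falls back to the CPU when no CUDA device is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Synthetic data just to illustrate the training loop.
x = torch.randn(256, 16, device=device)
y = torch.randn(256, 1, device=device)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```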

Optimization of genetic algorithms can be an effective method for finding promising solutions to computer science problems.

Genetic algorithm techniques are usually implemented in a simulation environment, and many common optimization problems can be solved using standard library software such as PowerMorph or Q-Learning.

Traditional software applications based on genetic algorithms require a trained expert to program and customize their agent. To enable automatic scripting, genetic algorithm software can be distributed as executable source code, which can then be compiled by ordinary users.

Genetic algorithms are optimized for known solutions that can be of any type (e.g. integer search, matrix factorization, partitioning, etc.). In contrast, Monte Carlo optimization requires that an optimal solution can be generated by an unknown method. The advantage of genetic algorithms over other optimization methods lies in their automatic control over the number of generations required, initial parameters, evaluation function, and reward for accurate predictions.
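
The points above about generations, initial parameters, and an evaluation function can be made concrete with a minimal genetic algorithm sketch in plain Python; the «one-max» fitness function, population size, and mutation rate are illustrative assumptions, not values from the text.

```python
# A minimal sketch (not from the book) of a genetic algorithm maximizing a toy
# fitness function over fixed-length bit strings.
import random

random.seed(1)
LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(bits):
    # Toy evaluation function: number of ones in the string ("one-max").
    return sum(bits)

def crossover(a, b):
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Tournament selection: keep the better of two random individuals.
    def select():
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b
    population = [mutate(crossover(select(), select())) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```

Raising MUTATION_RATE or lowering GENERATIONS shows directly how these controls trade exploration against convergence.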

An important property of a genetic algorithm is its ability to create a «wild» configuration of parameters (for example, alternating hot and cold endpoints) that correspond to a given learning rate (learning rate times the number of generations). This property allows the user to analyze and decide if the equilibrium configuration is unstable.

The downside of genetic algorithms is their dependence on distributed memory management. While extensive optimization techniques can be used to handle large input sets and multiple processor/core configurations, the complexity of this operation can make genetic-algorithm-based decisions vulnerable to resource constraints that impede progress. Even with well-written genetic algorithm code, programs based on genetic algorithms can, in theory, only find solutions when run on an appropriate computer architecture. Examples of problems for a genetic algorithm running on a more limited architecture include memory limits for storing the representations used by the genetic algorithm, memory limits imposed by the underlying operating system or instruction set, and limits imposed by the programmer, such as caps on the processing power and/or memory allocated to the genetic algorithm.

Many optimization algorithms have been developed that allow genetic algorithms to run efficiently on limited hardware or on a conventional computer, but implementations of genetic algorithms based on these algorithms have been limited due to their high requirements for specialized hardware.

Heterogeneous hardware is capable of delivering genetic algorithms with the speed and flexibility of a conventional computer, while using less energy and computer time. Most implementations of genetic algorithms are based on a genetic architecture approach.

Genetic algorithms can be seen as an example of discrete optimization and of computational complexity theory, and they provide a concise illustration of evolutionary algorithms. Unlike plain search algorithms, genetic algorithms let you control the changes in parameters that affect the quality of a solution. To do this, a genetic algorithm can explore a set of candidate algorithms for finding the optimal solution; when one of them converges to an optimal solution, it can switch to a candidate that is faster or more accurate.

In the mathematical language of program analysis, a genetic algorithm is a function that maps states into transitions to the next states. A state can be a single location in a shared space or a collection of states. A «generation» is the number of states and transitions between them that must be performed to reach the target state. The genetic algorithm uses transition probabilities to find the optimal solution and introduces a small number of new mutations each time a generation ends. Most mutations are random (or quasi-random) and can therefore be ignored by the genetic algorithm when testing behavior or making decisions. However, if the algorithm can be used to solve the optimization problem, this fact can be used to implement the mutation step.

Transition probabilities determine the parameters of the algorithm and are critical for determining a stable solution. As a simple example, if there was an unstable solution, but only certain states could be traversed, then the algorithm for finding a solution could run into problems, since the mutation mechanism would contribute to a change in the direction of movement of the algorithm. In other words, the problem of transition from one stable state to another will be solved by changing the current state.

Another example might be that there are two states, «cold» and «hot», and that it takes a certain amount of time to transition between these two states. To transition from one state to another in a certain amount of time, the algorithm can use the mutation function to switch between cold and hot states. Thus, mutations optimize the available space.
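
A minimal sketch of that «cold»/«hot» example in Python follows; the flip probability and number of generations are arbitrary illustrative values.

```python
# A minimal sketch (not from the book) of the "cold"/"hot" example above: a
# mutation step that flips the state with some probability, so the search can
# move between the two states over a number of generations.
import random

random.seed(7)
STATES = ("cold", "hot")

def mutate_state(state: str, flip_prob: float = 0.3) -> str:
    # With probability flip_prob, switch to the other state.
    if random.random() < flip_prob:
        return STATES[1 - STATES.index(state)]
    return state

state = "cold"
for generation in range(10):
    state = mutate_state(state)
    print(generation, state)
```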

Genetic algorithms do not require complex computational resources or detailed management of the network architecture. For example, a genetic algorithm can be adapted to run on a conventional computer when computing resources (memory and processing power) are limited. However, when genetic algorithms are constrained in this way, they can only estimate probabilities, which leads to poor results and unpredictable behavior.

Hybrid genetic algorithms combine a sequential genetic algorithm with a dynamic genetic algorithm in a random or probabilistic manner. Hybrid genetic algorithms improve the efficiency of the two methods by combining their advantages while retaining important aspects of both methods. They do not require a deep understanding of both mechanisms, and in some cases do not even require special knowledge in the field of genetic algorithms. There are many common genetic algorithms that have been implemented for different types of problems. Some notable use cases for these algorithms include extracting geotagged photos from social media, traffic prediction, image recognition in search engines, genetic matching between stem cell donors and recipients, and public service evaluations.

A probabilistic mutation is a mutation in which the probability that a new state will be observed in the current generation is unknown. Such mutations are closely related to genetic algorithms and to error-prone mutations. Probabilistic mutation is a useful method for checking that a system meets certain criteria. For example, a workflow may have an error threshold that is determined by the context of the operation; in this case, the choice of a new sequence depends on the probability of getting an error.
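
Continuing the workflow example, here is a minimal sketch of a probabilistic mutation step gated by an error threshold; the candidate generator, error function, and threshold are illustrative assumptions rather than part of the original description.

```python
# A minimal sketch (not from the book) of a probabilistic mutation step tied
# to an error threshold: candidates that exceed the threshold are rejected.
import random

random.seed(3)
ERROR_THRESHOLD = 0.2

def error(sequence):
    # Toy error measure: distance of the sequence's mean from a target of 0.5.
    return abs(sum(sequence) / len(sequence) - 0.5)

def probabilistic_mutation(sequence):
    # Propose a new sequence by re-sampling one random position.
    candidate = list(sequence)
    candidate[random.randrange(len(candidate))] = random.random()
    # Accept the candidate only if it stays within the error threshold.
    return candidate if error(candidate) <= ERROR_THRESHOLD else sequence

seq = [random.random() for _ in range(8)]
for _ in range(100):
    seq = probabilistic_mutation(seq)

print("final error:", round(error(seq), 3))
```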

Although probabilistic mutations are more complex than deterministic mutations, they are faster because there is no risk of failure. The probabilistic mutation algorithm, unlike deterministic mutations, can represent situations where the observed mutation probability is unknown. However, in contrast to the probabilistic mutation algorithm, parameters must be specified in a real genetic algorithm.

In practice, probabilistic mutations can be useful when the observed probabilities of each mutation are unknown. The difficulty of performing probabilistic mutations increases as more mutations are generated and as the probability of each mutation rises. Because of this, probabilistic mutations are most useful in situations where mutations occur frequently rather than in one-off situations. Since probabilistic mutations tend to proceed very slowly and have a high probability of failure, they are only practical for systems that can tolerate very high mutation rates.

There are also many hybrid mutation/genetic algorithms that are capable of generating deterministic or probabilistic mutations. Several variants of genetic algorithms have been used to compose music.

Inspired by a common technique, Harald Helfgott and Alberto O. Dinei developed an algorithm called MUSICA that generates music from the sequences of the first, second, and third bytes of a song. Their algorithm generated music from a six-part extended chord composition. Their algorithm produced a sequence of byte values for each element of the extended chord, and the initial value could be either the first byte or the second byte.

In April 2012, researchers at Harvard University published the Efficient Design of a Quality Assured Musical Genome, which described an approach using a genetic algorithm to create musical works.

Computer scientist Martin Wattenberg has proposed a proof of concept for an instrument based on a genetic algorithm capable of not only creating musical performances, but also composing them. His instrument, instead of randomly changing the elements of the performance, would keep certain similar elements constant. It performed both a «traditional» musical play and a «harmonizing» function. Wattenberg’s instrument would be more accurate, and one could compose the same piece using many different generative algorithms, each with different effects. The technology that makes the instruments would be available to musicians, allowing them to insert a musical phrase into the instrument and make it play a complete performance version.

Similar to modern electronic music, instruments that generate music can also be used to control light, sound, video, or displays.

In 1993, two scientists at the University of Minnesota developed a software package called Choir Designer to help researchers design scores for electronic musical instruments. With this package, the user creates fully detailed design plans for possible electronic musical instruments. The software allowed the user to enter a set of musical parameters into a folder-style document called a design template, and then use the music program to create complete, detailed, three-dimensional designs for the instrument and its parts. The data for the design templates was generated by Choir Designer software in a biological manner using genetic algorithms. One template could contain data from Propellerheads Reason music production software, Audacity digital sound editor, as well as regular computer data. In one template, for example, the SPL parameter could be changed to create a second, different sound. Today, no electronic instrument has been created using a design template, although in theory they could be.

Genetic programming

In artificial intelligence, genetic programming (GP) is a method of developing programs by modifying them with operations modeled on the way DNA is altered by mutation and recombination. The technique is most closely associated with John Koza, who developed it at Stanford University around 1990, and open-source implementations began to appear in the mid-1990s. Many modern implementations are accelerated on GPUs using platforms such as CUDA.

According to Koza, genetic programming is an evolutionary approach to programming with a strong focus on optimization, which is the core idea of evolutionary algorithms. The programs it evolves resemble those of any programming language, except that they include only basic lexical and syntactic constructs. In this sense, it mirrors the trial-and-refinement process the human brain uses to develop programs.

While genetic programming can be thought of as a pattern-matching technique, in which a system performs exactly the same task using only the mechanisms it has evolved, it is much more general in nature. In evolutionary programming, the exact shape of the adapted program is not important; only the behavior of the system is targeted.

Genetic programming adds constraints that guide evolution in the form of gene-like sequences (linear or hierarchical). During evolution the goal, by analogy with DNA being replicated rapidly to produce the proteins an organism needs, is to copy and vary these sequences quickly so that they produce the desired behavior and adapt to the current needs of the problem.

The genetic programming system is derived from genetic programming with random variables. GP is a formalization of an evolutionary process that takes a program as input and produces an executable function as output. GP is also an evolving language, so it can be seen as a procedural language as well as a programming language. From the standpoint of programming language design, GP can be viewed as statically typed rather than dynamically typed, as a general-purpose language typically is.
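
To ground the description of GP taking a program-like representation as input and producing an executable function as output, here is a minimal genetic programming sketch in plain Python that evolves small arithmetic expression trees toward a hidden target function; the target, operators, population size, and mutation scheme are all illustrative assumptions.

```python
# A minimal sketch (not from the book) of genetic programming: evolving small
# arithmetic expression trees so the resulting executable function
# approximates a hidden target, here f(x) = x*x + x.
import random
import operator

random.seed(0)
OPS = [operator.add, operator.sub, operator.mul]
TARGET = lambda x: x * x + x
SAMPLES = [x / 4.0 for x in range(-8, 9)]

def random_tree(depth=3):
    # A leaf is the variable x or a small constant; a node applies an operator.
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.uniform(-2, 2)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return op(evaluate(left, x), evaluate(right, x))

def error(tree):
    return sum((evaluate(tree, x) - TARGET(x)) ** 2 for x in SAMPLES)

def mutate(tree):
    # Replace a random subtree with a fresh random tree.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(60)]
for generation in range(40):
    population.sort(key=error)
    survivors = population[:20]  # keep the fittest third
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=error)
print("best squared error:", round(error(best), 4))
```

In this toy run the evolved tree is the executable «function»: calling evaluate(best, x) applies it to new inputs, which is the program-in, function-out relationship described above.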

An important feature of genetic programming is its evolutionary nature, which here means the absence of a static type system. The program is created «at the moment» the program's input data are entered into the system and evolution occurs. This makes genetic programming extremely efficient, because it is fast and well suited to the precise evolutionary process that takes place during evolution.

For example, a genetic program can be written in a declarative programming language such as Simula, and evolution can occur as a side effect, allowing the program to keep running while the evolutionary process takes place. If a genetic program uses a dynamic type, the evolutionary phase of the program must stop.

Genetic programming grew out of evolutionary computation research in the 1980s and was popularized in the early 1990s. Early versions of such systems required programmers to supply carefully selected input examples for programming.
