Wednesday, July 17, 2019

Introduction to Computer Theory

CHAPTER 1: BACKGROUND

The twentieth century has been filled with the most incredible shocks and surprises: the theory of relativity, communist revolutions, psychoanalysis, nuclear war, television, moon walks, genetic engineering, and so on. As astounding as any of these is the advent of the electronic computer and its development from a mere calculating device into what seems like a thinking machine.

The birth of the computer was not wholly independent of the other events of this century. The history of the computer is a fascinating story; however, it is not the subject of this course. We are concerned with the theory of computers, which means that we form several mathematical models that will describe with varying degrees of accuracy parts of computers, types of computers, and similar machines. Our models will not be used to discuss the practical engineering details of the hardware of computers, but only the more abstract questions of the frontiers of capability of these mechanical devices.

There are separate courses that deal with circuits and switching theory (computer logic), with instruction sets and register arrangements (computer architecture), and with data structures, algorithms, operating systems, compiler design, artificial intelligence, and so forth. All of these courses have a theoretical component, but they differ from our study in two basic ways. First, they deal only with computers that already exist; our models, on the other hand, will encompass all computers that do exist, will exist, and that can ever be dreamed of. Second, they are interested in how best to do things; we shall not be interested in optimality at all, but rather we shall be concerned with the question of possibility: what can and what cannot be done.
We shall look at this from the perspective of what language structures the machines we describe can and cannot accept as input, and what possible meaning their output may have. This description of our intent is extremely general and perhaps a little misleading, but the mathematically precise definition of our study can be understood only by those who already know the ideas introduced in this course. This is often a characteristic of scholarship: after years of study one can just begin to define the subject. We are now embarking on a typical example of such a journey. In our last chapter (Chapter 31) we shall finally be able to define a computer.

The history of Computer Theory is also interesting. It was formed by fortunate coincidences, involving several seemingly unrelated branches of intellectual endeavor. A small series of contemporaneous discoveries, by very dissimilar people, separately motivated, flowed together to become our subject. Until we have established more of a foundation, we can only describe in general terms the different schools of thought that have melded into this field.

The most obvious component of Computer Theory is the theory of mathematical logic. As the twentieth century started, mathematics was facing a dilemma. Georg Cantor (1845-1918) had recently invented the Theory of Sets (unions, intersections, inclusion, cardinality, etc.). But at the very same time he had discovered some very uncomfortable paradoxes; he created things that looked like contradictions in what seemed to be rigorously proven mathematical theorems. Some of his unusual findings could be tolerated (such as that infinity comes in different sizes), but some could not (such as that some set is bigger than the universal set). This left a cloud over mathematics that needed to be resolved.
David Hilbert (1862-1943) wanted all of mathematics put on the same sound footing as Euclidean Geometry, which is characterized by precise definitions, explicit axioms, and rigorous proofs. The format of a Euclidean proof is precisely specified: every line is either an axiom, a previously proven theorem, or follows from the lines above it by one of a few simple rules of inference. The mathematics that developed in the centuries since Euclid did not follow this standard of precision. Hilbert believed that if mathematics were put back on the Euclidean standard the Cantor paradoxes would go away. He was concerned with two ambitious projects: first, to demonstrate that the new system was free of paradoxes; second, to find methods that would guarantee to enable humans to construct proofs of all the true statements in mathematics.

Hilbert wanted something formulaic: a precise routine for producing results, like the directions in a cookbook. First draw all these lines, then write all these equations, then calculate all these points, and so on and so on, and the proof is done; some approach that is certain and sure-fire, without any reliance on fickle and undependable brilliant mathematical insight. We simply follow the rules and the answer must come. This type of complete, guaranteed, easy-to-follow set of instructions is called an algorithm. He hoped that algorithms or procedures could be developed to solve whole classes of mathematical problems. The collection of techniques called linear algebra provides just such an algorithm for solving all systems of linear equations. Hilbert wanted to develop algorithms for solving other mathematical problems, perhaps even an algorithm that could solve all mathematical problems of any kind in some finite number of steps.
Before starting to look for such an algorithm, an exact notion of what is and what is not a mathematical statement had to be developed. After that, there was the problem of defining exactly what can and what cannot be a step in an algorithm. The words we have used, "procedure," "formula," "cookbook method," "complete instructions," are not part of mathematics and are no more meaningful than the word "algorithm" itself.

Mathematical logicians, while trying to follow the suggestions of Hilbert and straighten out the quandary left by Cantor, found that they were able to prove mathematically that some of the desired algorithms cannot exist; not only do they not exist now, but they can never exist in the future, either. Their main result was even more fantastic than that. Kurt Gödel (1906-1978) not only showed that there was no algorithm that could guarantee to provide proofs for all the true statements in mathematics, but he proved that not all the true statements even have a proof to be found. Gödel's Incompleteness Theorem implies that in a specific mathematical system either there are some true statements without any possible proof or else there are some false statements that can be "proven." This earth-shaking result made the mess in the philosophy of mathematics even worse, but very exciting.

If not every true statement has a proof, can we at least fulfill Hilbert's program by finding a proof-generating algorithm to provide proofs whenever they do exist? Logicians began to ask the question: Of what fundamental parts are all algorithms composed? The first general definition of an algorithm was proposed by Alonzo Church. Using his definition, he and Stephen Cole Kleene and, independently, Emil Post were able to prove that there were problems that no algorithm could solve. While also solving this problem independently, Alan Mathison Turing (1912-1954) developed the concept of a theoretical universal-algorithm machine.
Studying what was possible and what was not possible for such a machine to do, he discovered that some tasks that we might have expected this abstract omnipotent machine to be able to perform are impossible, even for it. Turing's model for a universal-algorithm machine is directly connected to the invention of the computer. In fact, for completely different reasons (wartime code-breaking) Turing himself had an important part in the construction of the first computer, which he based on his work in abstract logic.

On a wildly different front, two researchers in neurophysiology, Warren Sturgis McCulloch and Walter Pitts (1923-1969), constructed a mathematical model for the way in which sensory receptor organs in animals behave. The model they constructed for a "neural net" was a theoretical machine of the same nature as the one Turing invented, but with certain limitations. Mathematical models of real and abstract machines took on more and more importance. Along with mathematical models for biological processes, models were introduced to study psychological, economic, and social situations.

Again, entirely independent of these considerations, the invention of the vacuum tube and the subsequent developments in electronics enabled engineers to build fully automatic electronic calculators. These developments fulfilled the age-old dream of Blaise Pascal (1623-1662), Gottfried Wilhelm von Leibniz (1646-1716), and Charles Babbage (1792-1871), all of whom built mechanical calculating devices as powerful as their respective technologies would allow.

In the 1940s, gifted engineers began building the first generation of computers: the computer Colossus at Bletchley, England (Turing's decoder), the ABC machine built by John Atanasoff in Iowa, the Harvard Mark I built by Howard Aiken, and ENIAC built by John Presper Eckert, Jr. and John William Mauchly (1907-1980) at the University of Pennsylvania.
Shortly after the invention of the vacuum tube, the incredible mathematician John von Neumann (1903-1957) developed the idea of a stored-program computer. The idea of storing the program inside the computer and allowing the computer to operate on (and modify) the program as well as the data was a tremendous advance. It may have been conceived decades earlier by Babbage and his collaborator Ada Augusta, Countess of Lovelace (1815-1853), but their technology was not adequate to explore this possibility. The ramifications of this idea, as pursued by von Neumann and Turing, were quite profound.

The early calculators could perform only one predetermined set of tasks at a time. To make changes in their procedures, the calculators had to be physically rebuilt, either by rewiring, resetting, or reconnecting various parts. Von Neumann permanently wired certain operations into the machine and then designed a central control section that, after reading input data, could select which operation to perform based on a program or algorithm encoded in the input and stored in the computer along with the raw data to be processed. In this way, the inputs determined which operations were to be performed on themselves.

Interestingly, contemporary technology has progressed to the point where the ability to manufacture dedicated chips cheaply and easily has made the prospect of rebuilding a computer for each program feasible again. However, by the last chapters of this book we will appreciate the significance of the difference between these two approaches.

Von Neumann's goal was to convert the electronic calculator into a real-life model of one of the logicians' ideal universal-algorithm machines, such as those Turing had described. Thus we have an unusual situation in which the advanced theoretical work on the potential of the machine preceded the demonstration that the machine could really exist. The people who first discussed these machines only dreamed they might ever be built.
Many were very surprised to find them actually working in their own lifetimes. Along with the concept of programming a computer came the question: What is the best language in which to write programs? Many languages were invented, owing their distinction to the differences in the specific machines they were to be used on and to the differences in the types of problems for which they were designed. However, as more languages emerged, it became clear that they had many elements in common. They seemed to share the same possibilities and limitations. This observation was at first only intuitive, although Turing had already worked on much the same problem, but from a different angle.

At the time that a general theory of computer languages was being developed, another surprise occurred. Modern linguists, some influenced by the prevalent trends in mathematical logic and some by the emerging theories of developmental psychology, had been investigating a very similar subject: What is language in general? How could primitive humans have developed language? How do people understand it? How do they learn it as children? What ideas can be expressed, and in what ways? How do people construct sentences from the ideas in their minds?

Noam Chomsky created the subject of mathematical models for the description of languages to answer these questions. His theory grew to the point where it began to shed light on the study of computer languages. The languages humans invented to communicate with one another and the languages necessary for humans to communicate with machines shared many basic properties. Although we do not know exactly how humans understand language, we do know how machines digest what they are told. Thus, the formulations of mathematical logic became useful to linguistics, a previously nonmathematical subject.
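To give a taste of what a mathematical model of a language looks like, here is a minimal sketch (with names and details of our own invention, not taken from the text) of a Chomsky-style grammar. The two rules S -> aSb and S -> "" generate exactly the strings a^n b^n, a standard textbook language:

```python
def generate(n: int) -> str:
    """Apply the rule S -> aSb n times, then erase S with S -> ""."""
    return "a" * n + "b" * n

def in_language(word: str) -> bool:
    """Membership test for {a^n b^n : n >= 0}: the word must be some
    number of a's followed by exactly as many b's."""
    n = len(word) // 2
    return word == "a" * n + "b" * n

print(generate(3))          # aaabbb
print(in_language("aabb"))  # True
print(in_language("abab"))  # False
```

A grammar of this kind says nothing about what the strings mean; it is purely a mathematical description of which strings belong to the language, and that is precisely what made the idea transferable from human languages to programming languages.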
Metaphorically, we could say that the computer then took on linguistic abilities. It became a word processor, a translator, and an interpreter of simple grammar, as well as a compiler of computer languages. The computer software invented to interpret programming languages was applied to human languages as well. One point that will be made clear in our studies is why computer languages are easy for a computer to understand whereas human languages are very difficult.

Because of the many influences on its development, the subject of this book goes by various names. It includes three major fundamental areas: the Theory of Automata, the Theory of Formal Languages, and the Theory of Turing Machines. This book is divided into three parts corresponding to these topics.

Our subject is sometimes called Computation Theory rather than Computer Theory, since the items that are central to it are the types of tasks (algorithms or programs) that can be performed, not the mechanical nature of the physical computer itself. However, the name "computation" is also misleading, since it popularly connotes arithmetical operations that are only a fraction of what computers can do. The term "computation" is inaccurate when describing word processing, sorting, and searching, and awkward in discussions of program verification. Just as the term "Number Theory" is not limited to a description of calligraphic displays of number systems but focuses on the question of which equations can be solved in integers, and the term "Graph Theory" does not include bar graphs, pie charts, and histograms, so too Computer Theory need not be limited to a description of physical machines but can focus on the question of which tasks are possible for which machines.

We shall study different types of theoretical machines that are mathematical models for actual physical processes. By considering the possible inputs on which these machines can work, we can analyze their various strengths and weaknesses.
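As a first taste of this kind of analysis, here is a tiny sketch (a hypothetical illustration in Python, not one of the text's own models) of a machine judged purely by which input strings it accepts: a two-state automaton over the alphabet {a, b} that accepts exactly the strings ending in b.

```python
# Transition table: (current state, input symbol) -> next state.
TRANSITIONS = {
    ("start", "a"): "start",
    ("start", "b"): "seen_b",
    ("seen_b", "a"): "start",
    ("seen_b", "b"): "seen_b",
}
ACCEPTING = {"seen_b"}  # the machine accepts if it halts in this state

def accepts(word: str) -> bool:
    """Feed the word to the machine one symbol at a time and report
    whether it finishes in an accepting state."""
    state = "start"
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("aab"))  # True: the string ends in b
print(accepts("aba"))  # False: the string ends in a
```

The set of strings a machine accepts is its language, and comparing the languages different machines can and cannot accept is exactly how we will measure their strengths and weaknesses.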
We then arrive at what we may believe to be the most powerful machine possible. When we do, we shall be surprised to find tasks that even it cannot perform. This will be our last result: no matter what machine we build, there will always be questions that are simple to state that it cannot answer. Along the way, we shall begin to understand the concept of computability, which is the foundation of further research in this field. This is our goal. Computer Theory extends further to such topics as complexity and verification, but these are beyond our intended scope. Even for the topics we do cover, Automata, Languages, and Turing Machines, much more is known than we present here. As intriguing and lovable as the field has proven so far, with any luck the most fascinating theorems are yet to be discovered.
