% Systematic programming

Systematic programming : an introduction

My friend FP gave me the best birthday present in ages: "Systematic programming : an introduction" by Professor Wirth. I'm reading the book, which dates back to 1973, when computers were easily recognisable: the computer was the thing in the closet with the blinking lights. The book is about programming, and it should be on the reading list of all programmers, no matter what language they use.
Don't pay attention to the section on magnetic core memory, or to the "finiteness of memory": nobody could have expected that 35 years later every single PC would really HAVE unlimited amounts of RAM. Let's face it: 4 gigabytes of RAM is infinite. Just to type a shopping list.

Below are as many chapters of the book as I cared to retype. Scanning is slower than retyping.

Chapter 1 : Introduction

During the last decade, the computer has become an indispensable tool of business, industry and scientific research in performing tasks whose treatment would be impossible without it. The computer is an automaton that executes computational processes according to precisely specified rules. It usually possesses only a limited repertoire of elementary instructions that it "understands" and is capable of obeying, but these instructions are executed with tremendous expedience and reliability.
The essence of the computer's power and wide applicability lies in its ability to execute extremely long sequences of instructions containing an almost infinite combination of elementary actions. The act of incorporating such instruction sequences into "recipes" representing certain classes of computational processes is called 'programming'. But the fundamental ideas behind designing programs can be explained and understood without any reference to the computer.

Programming is a discipline with many applications - one that is open to systematic methods of mathematical analysis involving plenty of nontrivial problems, and one that is above all an intellectual challenge. But programming as a methodical technique has been little analyzed, because it leads to interesting applications and challenging problems - ones that require a solid foundation and a systematic approach - only when the programs reach a certain complexity and length (i.e. when they are composed of thousands or even millions of instructions).
Before the advent of the computer, there was no "slave" willing or capable of reliably executing such long sequences of commands with absolute, thoughtless, obedience; so the incentive for devising such programs was absent. Only the modern digital computer has made programming both challenging and relevant.

Chapter 2 : Fundamental notions

This chapter will introduce some of the important basic notions of programming. Because they are fundamental, these concepts cannot formally be defined in terms of other concepts. Instead they will be explained through the use of examples.
The most important notion is that of 'action'. In this context, an 'action' is understood to have a finite duration and to have an intended and well-defined effect. Each action requires the existence of some object on which the action is executed and on whose changes of state its effect can be recognized. Each action must also be describable in terms of a language or a system of formulas; its description is called a statement.
If the action can be decomposed into parts, then it is called a process or a computation. If these parts follow each other strictly in time and no two are executed simultaneously, then the process is called sequential. Consequently, a statement that describes a process can be broken up into parts; it is then called a program. A program thus consists of a set of statements whose textual ordering does not, in general, coincide with the ordering in time of the corresponding actions.

The driving force that actually executes the actions according to the specified statements is called a processor. This rather neutral word gives no clue as to whether the agent is a human being or an automaton. Indeed, programs have meaning without reference to a specific processor, so long as the underlying language is precisely defined. In general, the programmer is not interested in the identity of the processor. He need only be assured that it understands the language of its programs, for the programs are supposed to constitute the rules of behavior of the processor. The programmer therefore needs to know the kinds of statements that his available processor is capable of understanding and executing, and he must adapt his language accordingly.
Every action requires a certain amount of work, depending on the processor. This amount can be expressed as the time that it takes the processor to execute the action. This time span, in turn, can usually be more or less directly translated into a measure of cost. An experienced programmer will always take into account the capabilities of the processors available to him and will then select the solution with the least ensuing cost.

Since this text is primarily concerned with the design of programs to be executed by automatic processors (computers), the remainder of this chapter will outline some basic characteristics common to all digital computers. Preceding this bird's-eye view of computers, however, we would like to introduce two simple examples to illustrate the notions just defined.

example 1: multiplication

We are given the statement:
      
	multiply the two natural numbers x and y and denote their product by z
   
If the available processor understands this statement, that is, if it knows what is meant by "natural number" and by "multiplication", then further elaboration is unnecessary.

For the sake of argument, however, we will assume that the available processor

  1. does not understand sentences in natural language, accepting only certain formulas
  2. cannot multiply but only add
First of all, we notice that the objects of computation are natural numbers. The program, however, is not supposed to specify these numbers; rather it should specify a general pattern of behaviour for processes multiplying arbitrary pairs of natural numbers. In place of numbers, therefore, we simply use general names denoting variable objects, called 'variables'. At the beginning of each process, specific values must be assigned to these variables. This assignment is the most fundamental action in the computational processes executed by computers.

A variable is comparable to a blackboard: its value can be inspected ("read") as many times as desired, and it can be erased and overwritten. Overwriting, however, causes the previous value to be lost. Assignment of a value 'w' to a variable 'v' will subsequently be denoted by

	v := w						(2.1)
   
The symbol ':=' is called the 'assignment operator'.
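
The blackboard behaviour of a variable can be illustrated with a short sketch in Python, which writes '=' where the book writes ':=' (the specific values are, of course, arbitrary):

```python
# A variable holds exactly one value at a time; each assignment
# overwrites (and thereby loses) the previous value.
v = 3        # v := 3
w = v + 1    # the value of v may be read ("inspected") as often as desired
v = w        # overwrite: the previous value 3 is now lost
print(v)     # prints 4
```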

Using this notation, the multiplication statement can now be written formally as

	z := x * y					(2.2)
   
If this statement is decomposed into a sequence of additions following each other in time, then the action of multiplication becomes a sequential process and statement 2.2 assumes the form of a program. For the moment, this program will be formulated informally as:

step 1
       z := 0
       u := x
      
step 2
       repeat the statements
          z := z + y
	  u := u - 1
       until  u = 0
      

The process that is evoked by this program when specific values are given for x and y can be visualized by recording the values assigned to the variables 'u' and 'z' as the computation progresses in time. With 'x = 5' and 'y = 13' we obtain the table in (2.4).

Values of variables

Step    z    u
  1     0    5
  2    13    4
  2    26    3
  2    39    2
  2    52    1
  2    65    0

The process terminates, according to step 2, as soon as 'u = 0'. At this time, 'z' has acquired the final result (65 = 5 * 13). Such a table is called a 'trace'. Note that the sequential listing of values does not mean that these values are retained; instead, each variable has at any one time exactly one single value. This is due to the fact that an assignment overwrites the previous value of the variable.
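
The informal two-step program above can be sketched directly in Python. Note that, like the book's repeat...until construction, this sketch assumes x >= 1, since the loop body is executed at least once:

```python
def multiply(x, y):
    # Multiply natural numbers x and y using only addition,
    # following step 1 and step 2 of the program above.
    z = 0
    u = x
    while True:       # repeat the statements ...
        z = z + y
        u = u - 1
        if u == 0:    # ... until u = 0
            break
    return z

print(multiply(5, 13))   # prints 65, matching the trace
```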

The objects of this computation are numbers. To perform operations on specific numbers, it is necessary to represent these numbers by a specific notation. A 'choice of notation' is therefore unavoidable in executing a computation. The program, however, is generally valid without regard to specific notation or representation of its objects. It is also essential to distinguish between objects -even if they are abstract objects such as numbers- and their representation. In computers, for instance, numbers are usually represented by the states of magnetic storage elements, but it is highly desirable to be able to formulate processes dealing with numbers that can be obeyed by computers without reference to such magnetic states.

To illustrate these ideas and to demonstrate how the same computational process can be described by various notations, the table in (2.5) simply replaces the values in (2.4) by roman numerals:

Values of variables

Step    z        u
  1     0        V
  2     XIII     IV
  2     XXVI     III
  2     XXXIX    II
  2     LII      I
  2     LXV      0
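
To make the point about notation concrete, here is a small Python sketch (an addition of mine, not from the book) that produces the roman-numeral representation used in trace (2.5):

```python
def to_roman(x):
    # Represent a natural number x >= 1 in roman numerals (including
    # subtractive forms such as IV and IX), illustrating that the same
    # abstract value admits many different notations.
    symbols = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
               (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
               (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = ''
    for value, sym in symbols:
        while x >= value:   # greedily emit the largest symbol that fits
            out += sym
            x -= value
    return out

print(to_roman(65))   # prints LXV, as in the trace
```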

example 2 : division

We are given the instruction: 'divide the natural number x by the natural number y and denote the integer quotient as q and the remainder as r'.

More specifically, the following relations must hold:

      
	x = q * y + r		AND		0 <= r < y		(2.6.1)
   
Introducing the division operator 'div', the computation can be described by the following formal assignment statement:
      
	(q, r) := x div y						(2.6.2)
   
To demonstrate again the decomposition of this statement into a program, we assume that the program is to be specified for a processor incapable of division, that is, without the operator 'div'. Consequently, division has to be decomposed into a sequence of subtractions of the divisor 'y' from the dividend 'x' and the number of possible subtractions becomes the desired quotient 'q'.

step 1
       q := 0
       r := x
      
step 2
       while r >= y repeat
          q := q + 1
	  r := r - y
      

'x' and 'y' again denote 'constants' that represent given fixed values at the outset; 'q' and 'r' denote 'variables' with integral values. The process described by program 2.7, given the values 'x = 100' and 'y = 15', is listed in the trace:

Values of variables

Step    q     r
  1     0    100
  2     1     85
  2     2     70
  2     3     55
  2     4     40
  2     5     25
  2     6     10

The process is terminated as soon as r < y. The results are 'q = 6' and 'r = 10', thus satisfying the relations

      
	100 = 6 * 15 + 10		AND		0 <= 10 < 15		(2.9)
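
As with multiplication, the two-step division program can be sketched in Python. Unlike the multiplication sketch, it also handles x < y correctly, because the while-loop test precedes the loop body:

```python
def divide(x, y):
    # Integer quotient q and remainder r of natural numbers x and y,
    # computed by repeated subtraction as in steps 1 and 2 above.
    q = 0
    r = x
    while r >= y:    # "while r >= y repeat"
        q = q + 1
        r = r - y
    return q, r

q, r = divide(100, 15)
print(q, r)                                 # prints 6 10
assert 100 == q * 15 + r and 0 <= r < 15    # relations (2.6.1) hold
```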
   
These two examples are descriptions of sequential processes in which the individual assignments are performed strictly in sequential order in time. Henceforth, our discussions will be restricted to sequential processes, where the word 'process' will always be understood to be an abbreviation of 'sequential process'. This deliberate omission of nonsequential processes is made not only because conventional computers operate sequentially but mainly because the design of nonsequential programs -or systems of sequential but interdependent programs to be executed concurrently- is a subtle and difficult task requiring -as a basis- a thorough mastery of the art of designing sequential algorithms.

These two examples also show that every program describes a sequence of state transformations of the set of its variables. If the same program is obeyed twice with different initial values, then it would be a mistake to say that the two processes or computations were the same. However, they definitely follow the same pattern of behaviour.
The description of such a pattern of behaviour without reference to a particular processor is usually called an 'algorithm'. The term 'program' is properly used for algorithms designed so that they can be obeyed or followed by a specific processor type. The difference between a general -sometimes called abstract- algorithm and a computer program lies mainly in the fact that the latter must specify the rules of behaviour in every little detail and must be composed according to strict notational rules.
The reasons for this are the limited set of instructions that the machine is capable of understanding and executing, and its absolute obedience, based on its total lack of a critical attitude. These characteristics of the computer are criticized by most novices in the art of programming as the reasons behind the need for pedantic precision and attention to detail when dealing with computers. Indeed, even a trivial mistake in writing may lead to totally unintended and "meaningless" machine behaviour. This obvious absence of any "common sense" to which a programmer may appeal (whenever his own senses fail) has been criticized by professionals as well, and efforts have been undertaken to remedy this seeming deficiency.
The experienced programmer, however, learns to appreciate this servile attitude of the computer, which makes it possible to demand even "unusual" patterns of behaviour. For this is precisely what is impossible to ask of (human) processors, who are accustomed to rounding off every instruction to the nearest interpretation that is both plausible and pleasing to them.

Chapter 3 : the structure of computers

To design programs executable by an automatic computer, the programmer must first know his tool. The more precisely he knows his processor, the better he is able to tailor his algorithms into programs utilizing the particular capabilities of that processor. On the other hand, the more an algorithm is tailored and tuned to a processor, the larger the effort spent to develop the program. Under normal circumstances, a solution must be found that keeps the program development effort within reasonable limits while still yielding sufficiently good (i.e. efficient) programs.
To find such a solution, the programmer must know the kinds of adaptations that are fairly easy to perform but that, at the same time, yield a relatively large improvement. To this end, it is essential to know the most important, generally valid characteristics of computers while ignoring the idiosyncrasies and peculiarities (called 'features') of individual machines.

In all modern digital computers, we can distinguish between two main components: the processor and the store.

At each moment, the processor contains only the data to be processed immediately, that is, very few operands. Its own storage elements are called registers. All data that are not immediately needed are handed over to the store, which then plays the role of an "ice box".

example : evaluation of an expression

In order to evaluate an arithmetic expression with several operands and intermediate results, we apply again the technique of decomposing a complicated task into a sequence of simpler tasks. This causes individual arithmetic operations to take operands from the processor's registers and to replace them by the results. The evaluation of the expression
      
		a * b + c * d
   
is broken down into simpler instructions: R1 and R2 denote the processor's registers and 'z' designates the intermediate result, temporarily deposited in the store. The final result is made available in register R1.

The evaluation of the expression has thus been transformed into a short program consisting of three kinds of instructions or statements: transfers from the store to the registers, transfers from the registers to the store, and arithmetic operations on the registers.
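
The exact instruction listing was not retyped here, so the following Python sketch is my own reconstruction of a plausible decomposition; R1, R2 and z are ordinary variables standing in for the two registers and the storage cell:

```python
a, b, c, d = 2, 3, 4, 5   # arbitrary sample operands

R1 = a         # load a into register R1
R2 = b         # load b into register R2
R1 = R1 * R2   # R1 := a * b
z = R1         # deposit intermediate result in the store
R1 = c         # load c
R2 = d         # load d
R1 = R1 * R2   # R1 := c * d
R2 = z         # fetch the intermediate result back from the store
R1 = R1 + R2   # final result a * b + c * d in register R1

print(R1)      # prints 26 for these sample operands (2*3 + 4*5)
```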

The method of decomposing statements into more elementary steps and then temporarily saving intermediate results in the store is the reason that the same computational process can be executed by relatively simple as well as very sophisticated computers -- the former merely require more time. The decomposition method is the very essence of digital computer programming and is the basis for the application of relatively simple mechanisms to problems of enormous complexity. A precondition for the success of a computation consisting of billions of single steps (whose operands are always the results of previous steps) is, of course, a processor with a reasonably high speed and an absolute reliability. The realization of such processors is one of the true triumphs of modern technology.

The example of the evaluation of an expression also shows the necessity of a close interconnection between processor and store, since the amount of information flow between the two units is rather high. The store contains objects that must be accessible through distinct names (e.g. x, y, z...). Consequently, there must be an order in the store, like that found in post office boxes. The objects are therefore contained in a set of uniquely identifiable 'storage cells' each of which has a unique 'address'. Each access to the store must be accompanied by a specification of the address of the cell to be referenced.

Cells in a computer store resemble storage boxes used in daily life insofar as they contain and preserve an object. But this is where the analogy ends. The ability of computer stores to preserve data is not based on the fact that they physically harbour an object but instead on the fact that a certain 'encoding' of the object is reflected by the state of the cell. The cell must therefore be capable of a certain number of 'discrete states'. It is difficult to realize components capable of assuming and maintaining many clearly distinguishable states over an arbitrarily long time. It is feasible, however, to build storage elements having only two distinct states; these are called 'binary' storage elements. If a group of n binary storage cells is combined, this group can assume 2^n different combinations of states. If the group is considered as an indivisible unit, then it represents a storage cell with 2^n possible states.

Example: Encoding objects into groups of binary digits

We choose the positional representation of natural numbers (including 0). A natural number x is encoded as the following sequence of n binary digits (called 'bits'), that is, zeroes and ones:
	x : b(n-1) ... b(1) b(0)

where the encoding rule is given by
	x = b(0) + 2*b(1) + ... + 2^(n-1)*b(n-1)

This rule is by no means the only one possible, but in many respects it is the most appropriate. After all, it is the same rule on which the representation of arabic (decimal) numbers is based:
	x : d(m-1) ... d(1) d(0)

and
	x = d(0) + 10*d(1) + ... + 10^(m-1)*d(m-1)

Some examples of binary and decimal encodings (representations) of numbers are:

Binary     Decimal
   1101       13
  10101       21
 111111       63
1101011      107
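
The encoding rule above translates directly into a short Python sketch (the function name and the digit order in the result are my own choices):

```python
def to_bits(x, n):
    # Encode natural number x as n binary digits b(n-1) ... b(1) b(0),
    # so that x = b(0) + 2*b(1) + ... + 2^(n-1)*b(n-1).
    bits = []
    for _ in range(n):
        bits.append(x % 2)    # extract b(0), then b(1), ...
        x = x // 2
    return list(reversed(bits))   # most significant digit first

print(to_bits(13, 4))     # prints [1, 1, 0, 1], i.e. binary 1101
```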

The most important lesson to be learned from this example is that finite storage cells - that is, cells able to assume only a finite number of discernible states (no others exist in reality) - are capable of storing numbers only from a finite range of values. In a computer, the number of binary storage elements grouped into a single addressable storage element (a word) is usually called the 'word length'. The capabilities of the arithmetic unit are adapted to this measure. Common word lengths are 8, 16, 24, 32, 48 and 64, with corresponding sets of 2^n distinct values.
The consequence of the basic requirement that the computer must be able to obey a given program is that it must have 'easy' access to that program. Where, then, is the most appropriate place to hold that program? It was the brilliant - and nowadays seemingly trivial - idea of John von Neumann to put the program into the store. Consequently, the same store is used to hold the objects and the 'recipe' of the computing processes, that is, the data AND the program.

Obviously, this concept of the 'stored program computer' requires that instructions also be encoded. In our example of an expression evaluation every instruction is representable by an 'operation code' (to specify reading, writing, adding, multiplying, etc) and, in some cases, by an operand. If operands are represented by storage cell addresses, and if these addresses are chosen to be the whole numbers 0, 1, 2 ... then the problem of encoding the program is essentially solved. Every program can be represented by sequences of numbers (or groups of numbers) and can therefore be deposited in a computer's store.
Another consequence of the stored program approach is that every program will occupy a certain number of storage cells, that is, a certain amount of 'storage space'. The number of occupied cells, which are no longer available to hold data, is proportional to the length of the program text. The programmer must therefore aim to keep his programs as concise as possible.

The following important capabilities of the modern digital computer are based on the concept of sharing the store between the program and the data.

  1. As soon as execution of a certain program P is terminated, a new program Q can be accepted in the store for subsequent execution (flexibility, wide applicability)
  2. A computer may generate (according to a certain program) a sequence of numbers that it will subsequently consider and interpret as encoded instructions. Data generated in the first step become the program obeyed in the second step
  3. A computer X may be instructed to consider sequences of numbers actually representing programs as data to be transformed (according to some translation program) into sequences of numbers representing programs encoded for a different computer Y

Chapter 4 : Programming aids and systems

Until the late 1950s, programming consisted of the detailed encoding of long sequences of instructions -- initially written in some symbolic notation -- into numbers in binary (base 2), octal (base 8) or hexadecimal (base 16) form. This activity is called 'coding', in contrast to programming, which encompasses the more difficult task of designing algorithms. The inadequacies of this cumbersome procedure became more and more apparent with the advent of faster computers having larger stores.

  1. The coder was forced to tailor his programs to the particular characteristics of his available computer. He therefore had to consider all details of the machine, including its processor organisation and its instruction set. The exchange of programs between various machines was impossible and even the most thorough knowledge of coding methods on one machine was fairly useless when applied to another computer. Every institute designed programs of its own and was forced to dispose of them and to code new ones whenever a new computer replaced the old one. It became evident that the adaptation and tuning of algorithms to the peculiar characteristics of a specific computer represented an unwise investment of human intellect.
  2. The close binding of the programmer to one type of computer not only enabled but even encouraged the invention and application of all kinds of tricks to gain maximum machine performance. While this "trickology" was still considered the essence of good programming, the programmer spent considerable time in the construction of "optimal code" whose verification was generally very difficult. It was practically impossible to discover the principles of a program designed by a colleague - and often as difficult to explain those of one's own programs! This artistry of coding has now lost most of its glory. The experienced programmer consciously avoids the use of tricks, choosing systematic and transparent solutions instead.
  3. The so-called 'machine code' contained only a minimal amount of redundancy on the basis of which formal coding errors could be detected. As a result, even typing errors, which could have devastating effects when the program was executed, were difficult and time consuming to discover.
  4. The representation of a complex program as an unstructured linear sequence of commands was a most inappropriate form for the human inspector to comprehend and to express. We will show later that the presence and application of structure is the principal tool in helping the programmer to synthesize systematically and to maintain an overall comprehension of complicated programs.
These shortcomings led to the so-called "high level" 'programming languages'. Such languages became the means through which to instruct an idealized hypothetical computer that is designed not according to the limitations of current technology but according to the habits and the capabilities of man to express his thoughts. In this situation, however, we are confronted with a machine 'A', which is economically realizable but neither convenient nor encouraging to use, and a machine 'B', which is suited to human needs but exists only on paper. The gap between these two kinds of objects is now bridged by an entity called 'software'. In contrast, the physical machine is called 'hardware'. A software system is a 'program' C that, when executed by the existing computer A, enables A to translate programs written for the hypothetical computer B into programs of its own. Program C is called a translator or 'compiler'; it enables A to appear as the idealized machine B.

The utilization of compiler C thus relieves the programmer of the burden of considering the particular, detailed characteristics of his computer A. But it does not free him of the duty to be constantly aware that it is machine A that will ultimately execute his programs and that A has some definite limitations imposed by its finite speed and storage capacity.

Usually, a combined hardware-software system processes a program P in two distinct steps, which follow each other in time. During the first step, P is translated by the compiler C into a form interpretable by A; this step is called 'compilation'. In the second step, the translated program is executed. This step is called 'execution'.

Compilation    program      =  compiler C
               input data   =  program P in language B
               output data  =  program P in language A

Execution      program      =  P in language A
               input data   =  arguments of computation 'X'
               output data  =  computational results 'Y'

Page created D-Day 2009