
# ITiCSE

## Banquet Speech: Problems in Computer Science Education

David Gries, cs@uga.edu, cs@cornell.edu

Good evening!

It is a pleasure to be here at this conference, even though I have to sing for my supper!

But it takes more than singing to give such a speech. While you were eating away, I spent my time moving people away from their eating trough and getting the projector set up there so you could see the projector screen. Such banquet speeches are harder than keynote speeches, because they have to be entertaining as well as make a point. And the rooms in which banquet speeches are given are often not conducive to such activities. This particular room is a 6-pillar room --"pillars" being a measure of difficulty because of the way they get in the way of seeing. On the shape scale, it measures 2 out of 10 because its L-shapedness means that some of you can't see anything. Well, after that good meal, you would probably rather sleep anyway.

This restaurant is called "Varna". Varna is also a town two miles from Ithaca, NY, where I spent over 30 years teaching computer science. The Dalai Lama came to Ithaca in the early 1990's and set up a small Buddhist temple there. Everyone wondered why he chose Ithaca, of all places --a small, centrally isolated town of 35,000, that's about 2 and a half times the zip code. For me, it was clear why the Dalai Lama chose Ithaca: Ithaca is Near Varna [pronounced nirvana]!

The pleasures of teaching introductory programming

These days, there's a lot of debate going on about what programming language to teach in the introductory course and how to teach it. Niklaus Wirth discussed this issue in his keynote yesterday morning. The debate has been going on ever since I can remember, and I got my PhD and started teaching in 1966, 36 years ago.
We went from Fortran to Algol 60 to Algol W to PL/1 to Pascal to C/C++ (God help us) to Java, with side steps to Lisp or Scheme or Haskell or Prolog. At each step, we debated and worried about the availability of good compilers (or programming environments) and textbooks. Each time, it was deja vu all over again.

But there are distinct differences between then and now. Then, we argued about technical merit and pedagogical concerns, about things like elegance, and simplicity, and style.

## Language shapes the thought and culture.
said Benjamin Whorf, and we believed this saying and took it to heart. But the switch to C/C++ was based not on pedagogical and technical considerations but on what students need to get a job and on popular sentiment from outside CS. No one has ever said that C or C++ were pedagogically good languages to teach to beginners. No one believes that. It was pressure from industry and from other academic departments, who wanted their people to be able to program in C/C++, that carried the day.

We sold our soul when we adopted C/C++. Well, some of you did. Wirth didn't. I didn't. Our department's vote for C/C++ at Cornell was 23 to 1. I was the 1. I gave all the right pedagogical reasons why C/C++ was the wrong choice. No one listened; they didn't care about pedagogical concerns. They cared about what industry was doing and what some Engineering College departments wanted.

I refused to teach the introductory course when it switched to C/C++; I have never taught these languages or used them in a course, and I never will.
A year after the switch from C/C++ to Java in the introductory course at Cornell, all the later courses like operating systems and compiler construction voluntarily switched to Java because it was so much easier on the instructor and student! Then, another debate began: Our students HAVE to learn C/C++ to get a job. Which course should be required to use C/C++? They chose data structures, of all courses! Needless to say, after that I did not teach data structures and algorithms.

The C/C++ debacle is a symptom of an incurable disease that no other field has: CS is too close to its areas of application; in fact, they are becoming the same. No one would tell a physicist what research to do or how to teach their subject. No one would tell a chemist what notation to use on the blackboard. But everyone thinks they know enough about computing to tell us what to do and when to do it. Outside forces influence us too much.

In his keynote talk yesterday, Wirth said that he would like someone to write an introductory programming text that (0) would be moderate in size, (1) would contain a concise, formal definition of the language being used (no more than 20 pages), (2) would start with a succinct introduction into the basic notions of program design, (3) would establish intuitive and precisely defined terminology, (4) would discuss the basic concepts of iteration, recursion, assertion, and invariant, and (5) would have as a central topic the structuring of statements and the typing of data.

Even if someone writes that book, I am pessimistic about people using it, because it won't use a popular language! Who is going to use a book that doesn't use C/C++ or Java? Out of the question. It's a symptom of the main problem of our field; we are influenced too much by outsiders.

Wirth's fame

Niklaus Wirth is one of the people who has had a good deal of influence on computer science education. Wirth's first claim to fame is not his PhD work but the introduction given to him by Aad van Wijngaarden at IFIP 1965 in New York. It had to do with the fact that Algol 60 had two methods of passing parameters, called "call by name" and "call by value". "Call by name", which allowed something called "Jensen's device", has not, I believe, appeared in a programming language since. It was simply recalled.
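For readers who never met Algol 60: call by name re-evaluates the actual-parameter expression at every use inside the procedure, which compilers implemented with "thunks" --small parameterless procedures. Here is a sketch in Python that simulates the mechanism with closures; the names `sigma`, `env`, and `xs` are my own illustrative inventions, not anything from Algol or from this talk.

```python
# Simulating Algol 60 call-by-name with thunks (closures).
# Jensen's device: because the 'term' expression is re-evaluated on each
# access, changing the index i changes the value of the term.

def sigma(lo, hi, set_i, term):
    """Sum term() for i = lo..hi, where term is a thunk evaluated anew each time."""
    total = 0
    for i in range(lo, hi + 1):
        set_i(i)         # update the 'name' i in the caller's environment
        total += term()  # re-evaluate the expression passed "by name"
    return total

env = {"i": 0}           # stands in for the caller's variable i
xs = [1, 2, 3, 4]
# Sum of squares xs[i]**2 for i = 0..3, via Jensen's device:
result = sigma(0, 3, lambda v: env.update(i=v), lambda: xs[env["i"]] ** 2)
print(result)  # 1 + 4 + 9 + 16 = 30
```

In real Algol 60 the compiler generated the two thunks itself; the programmer simply wrote `sigma(0, 3, i, xs[i]*xs[i])`.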

In 1962-63, I was in Illinois, working on a PhD in mathematics, and helping two Germans, Manfred Paul and Hans Ruediger Wiehle, write one of the first complete Algol 60 compilers, the Alcor Illinois IBM 7090 Algol compiler. One Friday, Paul and Wiehle said that if we didn't figure out how to implement call by name and recursion efficiently by Monday, we were going to have to leave recursion out. I, a lowly Math PhD student at the time, spent the weekend coming up with the implementation, feeling very proud of it. I didn't know about or have access to work by Edsger W. Dijkstra at the time, although we did have a draft of a paper on "thunks".

Back to van Wijngaarden's introduction of Wirth, which is what made Wirth famous. To make him even more famous, I have encapsulated the event in a few limericks.

```	'65 IFIP was fun
When van Wijngaarden made a neat pun
With Algol 60 dealt it
How its args do transmit
When passed to a proc or a func.

Call by value is known to us all
It's in Java and Lisp and Cobol
Call by name is quite diff'rent
Needs a thunk to it implement
After Algol [60] it was recalled.

So his opportunity came
To give this man Niklaus some fame
U.S.A. says Klaus WORTH,
Call by VALUE, we smirk,
Says Europe, Klaus WIRTH --call by name!
```

The Software Engineering Conferences

To the left is a picture of Wirth from just a bit later, at the 1969 NATO Software Engineering Conference. The bad thing about Wirth at the time was that he didn't have a beard, but we worked on him, and the situation has been rectified. To the right, we show how he looks now.

The NATO conferences on Software Engineering, in 1968 in Germany and 1969 in Italy, were momentous occasions, where for the first time industry and academia got together and discussed the fact that they didn't know what they were doing. The first conference popularized the terms "software engineering" and "software crisis". If you have the chance, I urge you to go to the website and download the proceedings of those two conferences, for they are of great historical interest and will give you a different perspective on those times. They will tell you not only of software engineering but also of programming and education in computer science in general. Also, most of the pictures shown in this talk come from that website. Brian Randell did a nice job of putting the website together.

Here I am at the time, with a beard, of course. I had received my PhD in 1966 from MIT --the Munich Institute of Technology. I went there after spending one year in Illinois because Manfred Paul and Ruediger Wiehle were returning to Munich. They said I should do my PhD in Munich, but under their breath they were saying, "where we can make sure you finish the Algol compiler". Needless to say, I did both. My advisor, F.L. Bauer, was one of the organizers of the 1968 NATO conference, so I was invited. To the right is what I look like today --still with beard.

Here are two group pictures taken at the first 1968 Software Engineering Conference, one inside and one outside. One thing you should notice is how civilized people were in those days --everyone wore a suit and tie! Look around you at this banquet. Perhaps 2 or 3 people have a jacket and tie, including me of course. The rest of you are dressed in any old thing, showing no respect for this conference, its organizers, and me. Just look around at all of you! Well, on second thought you look much more comfortable than I do. It's hot and sticky. I'll just take off my jacket and tie and get more comfortable, like you. There, that's better.

The editors of the Proceedings of the 1968 conference were Peter Naur and Brian Randell. With beards, of course. Peter Naur of Denmark is most famous for his editing of the Algol 60 report, one of the best language definitions ever seen. Brian Randell and Peter produced the report of that conference, which is different from almost any proceedings you see. Much of the discussion was taped or written down on-the-fly and later transcribed, so it contains the back-and-forth discussions that took place. Definitely worth reading.

Edsger W. Dijkstra, bearded, of course, has done more for computer science and CS education than almost anyone, although his work is not appreciated by, or even known to, many programmers and computer scientists these days. He received the ACM Turing Award in 1972, and this was before his work on weakest preconditions, guarded commands, the formal development of programs, and mathematical methodology. The Turing Award was given for his work on operating systems and structured programming.

At the second Software Engineering Conference, Edsger at one point said something, in his usual forthright, honest, and sometimes caustic manner, that evoked a great deal of applause:

What is actually happening, I am afraid, is that we all tell each other and ourselves that software engineering techniques should be improved considerably, because there is a crisis. But there are a few boundary conditions which apparently have to be satisfied:

1. We may not change our thinking habits.
2. We may not change our programming tools.
3. We may not change our hardware.
4. We may not change our tasks.
5. We may not change the organizational set-up in which the work has to be done.

Now under these five immutable boundary conditions, we have to try to improve matters. This is utterly ridiculous.

Unfortunately, some of these points hold for computer science education today. For example, most authors of introductory texts haven't changed their thinking habits regarding the use of loop invariants. The theory and practice of loop invariants is 30 years old. In 1973, I wrote the first intro-to-programming text, using PL/1, that seriously discussed loop invariants as a practical tool. Yet, today, half the introductory programming texts don't mention loop invariants, and most of those that do don't actually use them. Without using loop invariants, I don't understand how they can teach the development of loops --e.g. what the initialization of a loop is for, or how to find a loop condition-- or the partition algorithm of quicksort, or even selection sort.
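To make the point concrete, here is how a loop invariant guides the development of selection sort, one of the algorithms mentioned above. This is my own sketch in Python, not a fragment from any of the texts discussed; the comments record the invariant, why initialization establishes it, and why the body preserves it.

```python
def selection_sort(b):
    """Sort list b in place, developed from a loop invariant."""
    n = len(b)
    k = 0
    # Invariant P: b[0..k-1] is sorted, and every element of b[0..k-1]
    #              is <= every element of b[k..n-1].
    # Initialization: k = 0 makes b[0..k-1] empty, so P holds trivially.
    while k < n:  # on exit k == n, and P then says all of b is sorted
        # Find the index m of a minimum element of b[k..n-1].
        m = k
        for j in range(k + 1, n):
            if b[j] < b[m]:
                m = j
        # Swapping that minimum into position k extends the sorted
        # prefix by one element while preserving P.
        b[k], b[m] = b[m], b[k]
        k += 1
    return b
```

The loop condition `k < n` is not guessed: it is exactly what must be false for the invariant to imply the postcondition "b is sorted".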

Changing our programming tools in our courses, especially the programming language, is becoming harder and harder. The procedural or object-oriented language taught must be one of the popular ones: first C/C++, now Java. The various versions of Modula, Oberon, and Eiffel don't catch on in academia because students and teachers alike don't think they are popular enough --we must teach something that students will use when they graduate. The fact that all the high schools in the US, and therefore most of the universities in the US, teach whatever the Educational Testing Service uses in its "Advanced Placement" tests strangles choice to some extent.

So, Dijkstra's remarks in 1969 still hold.

Here's one more person who was at a NATO conference and who has had a great influence on our field: Tony Hoare. He is the author of the quicksort algorithm, developer of the first axiomatic basis for programming, author of both the notation and the theory of CSP, and creator of countless other works.

After the conference, Dijkstra went up to Tony and said, "Tony, your creative juices, as well as your ability with words, lie in the length of the hair on your chin. Remember Samson! His strength was proportional to the length of his hair; creativity in men in our field, as well as a way with words, depends on having a beard." As you can see to the right, this had an effect on Tony. Edsger went on, "I have been wondering what makes a woman creative, for certainly it is not a beard. I see many creative women, but I haven't been able to pinpoint the attribute that makes them creative. I have some notions about it, but I am not prepared to divulge them yet."

[Note: I had the sentence about women's creativity prepared, but it slipped my mind and I didn't say it during my talk. At least ten women came up after the talk and commented on the omission. I will not repeat what a few of them said was the source of creativity in women.]

Read any of Hoare's papers and you'll see that the beard helped. He has a way with words that perhaps is second only to Dijkstra's. Here are two of his more famous ones:

Algol 60 was a great achievement; it was a significant advance over most of its successors.

There are two ways to construct a piece of software: One is to make it so simple that there are obviously no errors, and the other is to make it so complicated that there are no obvious errors.

In summary, we programming educators as a group are listening too much to what others have to say and not concentrating on the concepts and principles we are trying to teach. As long as we listen to outsiders so much, we will not be able to change the thinking habits of our students, which is what is desperately needed.

Problems elsewhere

The problems of education don't have to do only with programming. One other important topic is the mathematics we teach --and how we teach it. Let's concentrate on the discrete math.

Here's how most students come into the traditional discrete math course:

1. Know very little math.
2. Think math is useless.
3. Afraid of math and its notation.

And here's how they come out:

1. Know very little math.
2. Think math is useless.
3. Afraid of math and its notation.
4. Hate discrete math.

You laugh, but it's true. The problem is that discrete math is usually taught as a hodgepodge of unrelated topics, most of which, the students feel, have no application at all. For example, logic is usually taught in 1 to 2 weeks, after which the notation and concepts are not used. So the student gets the idea that logic is really useless and the time was wasted.

Further, the traditional discrete math course does nothing to allay the students' fear of math. The students get no real help with learning proofs and how to develop them.

There is a workshop here at this conference on discrete math. Peter Henderson, who introduced me, is running it quite ably. One of the important functions of this workshop is to develop examples that show how useful math is in different areas of computer science, including programming. These examples will be put in a repository, most likely a web page, where instructors can download them and incorporate them into their courses. One of the major problems is to keep the repository up to date over the years, to ensure that people view it as a live, useful tool. This work is important.

However, this is only a small step in the right direction. Of more importance, as I see it, is to change the way we think about discrete mathematics and, in particular, logic. Logic is extremely important. As Fred B. Schneider and I said in our text "A Logical Approach to Discrete Math",

## Logic is the glue that binds together methods of reasoning, in all domains.

The traditional proof methods --for example, proof by assumption, contradiction, mutual implication, and induction-- have their basis in formal logic. Thus, whether proofs are to be presented formally or informally, a study of logic can provide understanding.

But the logics that logicians have developed and want us to study are awkward to use and not at all what mathematicians use. The reason for this is simple. Logicians don't USE logic; they just study it. We want to USE logic, so we have had to develop our own.

Since the 1980s, people in formal program methodology --Edsger W. Dijkstra, Wim Feijen, Netty van Gasteren, Roland Backhouse, myself, and a host of others-- have worked on a "calculational logic", that is, a real, live logic that provides the theory under which a great deal of mathematics is done. It uses the kind of proof shown on the right. You will find this kind of proof all over calculus, group theory, and so on. Students may do a few such proofs even in high school, but they are rarely shown the foundations behind such proofs, or strategies and principles for developing them.
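For readers who have not seen the format, here is a tiny calculational proof applied to school algebra --my own illustrative example, not one from the talk. Each step is an equality between the expression above and the expression below it, and the hint in angle brackets justifies that step:

```
   (a + b)^2 - (a - b)^2
=    <expand both squares>
   (a^2 + 2ab + b^2) - (a^2 - 2ab + b^2)
=    <cancel the a^2 and b^2 terms>
   2ab + 2ab
=    <arithmetic>
   4ab
```

Read top to bottom, the chain proves (a + b)^2 - (a - b)^2 = 4ab. Exactly the same format carries over to propositional logic, where the hints name axioms and previously proved theorems.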

Fred B. Schneider and I wrote the above mentioned text, "A Logical Approach to Discrete Math", basing all the discrete math in the course on calculational logic. I have taught several times using it. We spend 6-7 weeks on calculational logic (which includes quantification), not only showing proofs but teaching strategies and principles for developing proofs. The students themselves develop over 50 calculational proofs during the semester. Then, we USE the calculational logic in developing set theory, a theory of integers, combinatorics, solving recurrence relations, and the like. Students come away with a better understanding of mathematics. Some say at the end of the course that they are not so afraid of math anymore. Others say that they are using the skills they learned in other courses. Moreover, they see calculational logic as a useful tool and as the thread that binds all the topics of discrete math together.

Others have had similar experiences. Stan Warford of Pepperdine teaches discrete math to freshmen. He says the students used to hate developing proofs and he used to hate grading them. Now, he says, it's different. Students talk together about their proofs; they almost enjoy them. And he finds the grading to be easy, because it's obvious where a proof is wrong.

And yet, this calculational logic has not caught on very much. Why not? Because it requires a change in one's thinking habits. Put purely and simply, people, including current instructors of discrete math, don't want to change their habits. They prefer to continue in their current way, mistaking what is currently comfortable for them for the most convenient way to do things. They refuse to take the time to investigate anything that requires them to think beyond their current habits. As mentioned earlier, Dijkstra said in 1969 that we are not allowed to change our habits. As Mark Twain said,

## Nothing needs changing so much as the habits of OTHERS.

Real progress cannot be made if we simply continue to make small advances without changing how we think about things! That just doesn't work! We all have to be on the lookout for ways to change how we think --for the better, of course.

In conclusion

I am speaking to a group of people whose interests are deeply in computer science education. I ask you to think carefully about what you are teaching, and how, and why. Think about the pedagogical tools you use. Ask yourself whether what you are teaching is preparing students in the best possible way for the future.

Then see how you can advance your teaching, by taking small steps that advance your teaching skill and help the field. Participate in projects like the discrete-math group's repository of examples of math in computer science.

But also take great leaps forward, once you have investigated and believe they really will help, even if it requires changing your thinking habits. Of course, any big leap is taking a risk, and many people don't want to take risks. I say,

## The person who wants the best apple must go out on a limb.

With that, let me close this speech; it has probably gone on too long. Again, I thank you for this chance to sing for my supper. It has been most enjoyable.

Page created by David Gries and