Evolution of Programming Methodology, Part I
Why programming's evolved much slower than hardware
By Bill Nicholls
February 28, 2000
Despite several generations of rapid hardware advances since the 1960s, software has barely ambled through one generation. The old saying that "the cobbler's children have no shoes" applies directly to programmers. While programmers have diligently built castles for end users, they are still using stone-age tools based on hand labor.
Recent developments with objects, components and patterns
have opened the potential for quantum jumps in programmer
tool capability and productivity. What will it take to make
this leap?
The search for better ways to solve problems with computers has led, very slowly, to the discovery of more effective programming methods. The small size and relative simplicity of early 1960s programs made success easy. This led to the false assumption that difficulty increases linearly with program size. Later experience showed that difficulty grows exponentially, driven by the number of interactions within the program: doubling the number of parts far more than doubles the ways they can interact and go wrong.
What took longer to become clear was that there was no
"silver bullet" to kill the problem of programming
complexity. Early efforts with structured programming
reduced this problem, but added a new set of interactions.
Each additional technique added some power at the cost of
additional training and new ways to go wrong.
The new methods of objects and patterns differ from their
predecessors, yet the latest tools do not seem to herald the
dawn of a new programming age. In part one of this column, I
will show the evolution of methods from the dark ages of
spaghetti programming through the discoveries of
subroutines, structure, project organization, and programmer
teams. This historical approach will make the slow pace of progress clear to those who have not lived through it.
In part two, you will see a major change in methodology with Model-View-Controller (MVC) methods, then Client-Server, Objects, MVC again, and Patterns. Still, even
today we are well short of what I would call a programming
revolution. What is needed is a quantum jump in programming
methodology. I'll venture some analysis and suggestions on
how that could happen at the end of part two, next month.
The Dark Ages
In the beginning, from the mid-1950s to the early 1960s, there was chaos. There was no formal study
of programming, no university degrees in computer science,
just people trying to solve problems that were beyond a
roomful of calculators.
Programming began with writing code in decimal numbers, positioned at specific locations in the computer's memory. This was the lowest level: direct entry into memory before starting the computer at the first instruction. I used this technique on an IBM 1620 at the University of Notre Dame. One instruction read the first card from the reader into memory; another jumped to the first location of the card buffer. Booting, 1963 style.
Fortunately, my school had experienced people who had extended the Fortran compilers. R. S. Eikenberry, an aerospace engineering professor, along with others, had developed a load-and-go Fortran compiler for the original Fortran, not Fortran II or IV. It was called 'DoAll Fortran.' The compiler resided in memory, read control cards, compiled and executed the students' programs, then punched cards for output. The student carried the punched deck over to the IBM 407 and listed it on the traditional green-bar paper. It was a major productivity advance over the previous method of teaching students programming.