In 1950, fifty-one people attended the Summer School on Programme Design for Automatic Digital Computing Machines at Cambridge University. Over the previous decade, engineering and mathematical researchers had developed the first stored-program computers, and figured out how to operate them as they went; the students who came to Cambridge that summer were the first to sign up to specifically learn the art on Cambridge's EDSAC computer.
The students who attended were a varied lot, with varying goals -- one was actually a salesman for Ferranti, the company that would release the first commercial computer the next year, and he spent more time chatting up potential customers than learning how to program. The physical experience of programming was radically different from how we'd understand it today, as this remarkable film illustrates. Still, they are the predecessors of every young person on summer vacation today who's waiting for college -- and their education in computer science -- to start.
With tech changing so quickly, many aspects of these young people's education will be different from those on a similar track just a decade or so ago -- different, in other words, from those who will be their peers and co-workers once they reach the job market. If you're a long-time IT professional, some of what you did in school will look as outdated to these new students as those keypunch EDSAC programming techniques look to you. It's instructive to see what has -- and hasn't -- changed over the decades.
What you have to work with
Rob Pierce has enjoyed a smorgasbord of decades' worth of computer education: he took an introduction to computer concepts in the mid-1980s, an introduction to programming course in the 1990s, and is taking a data structures and object-oriented programming class today. One of his observations is basic, but might not occur to someone entering the field today: "One big change is the expectation that everyone has their own computer." The days when computers were room-sized devices like EDSAC lingered on for decades after computer science became a regular part of college curricula.
Nancie K. began her computer science undergraduate life in 1981, just as computers really started getting personal. She entered a world in transition -- and one where rank had its privileges. "When I started, in '81, the university had around 12 terminals available for the CS department, hooked to the single mainframe the university owned. Only seniors and grad students were allowed to use them; everyone else had to use punch cards." Even getting access to one of those terminals wasn't the golden ticket, though: "They were connected to a mainframe the university leased space on, owned by one of the major banks in Florida. Of course, the university jobs had the lowest priority on the computer. I once waited 45 minutes just for a logon prompt -- after that I continued to use cards until my final semester."
But Nancie had an ace up her sleeve: a TRS-80 Model III of her own. Many of her classes were taught using assembly, Pascal, or COBOL, but some were language-agnostic, and for those "I used Fortran, because I had a Fortran compiler for my TRS-80 Model III." She ended up pushing her hardware to the limit: "the TRS had 48 KB memory, and on at least one occasion I had two versions of my homework. One had comments and was nicely formatted, the other had all of that removed so that the TRS had enough memory to actually compile and run it."
Just five years later, when Pierce took his first class in 1986, his hardware environment was quite different: a lab full of Apple IIs. If you need a sense of how computer education, and indeed the whole industry, transformed in a very brief period, imagine the leap from using punchcards to microcomputers in half a decade. By contrast, Dr. Nick Carlson, a civil engineer and an instructor at New Jersey community colleges, considers the work environment he oversees today -- "a lab full of networked desktops running Windows" -- as being essentially the same as what he would've used when he took his first programming class more than fifteen years earlier.
How far do you drill down?
When I was taking computer classes in high school in the late 1980s, we discussed transistors and logic gates, not that I really remember much of it or ever fully grasped how it related to programming a computer. Still, I wondered if that was something that anyone in school today would still be expected to understand at an introductory level. Nancie K. may have been working on assembly language code in the early 1980s, but in Rob Pierce's experience, modern-day classes are quite different. "C/C++ has been replaced by higher-level VB and Java," he says, "and 'while' and 'for' loops are taught long before stacks and pointers."
"I have an uncle who worked for IBM back in the 1960s and he used to program with a soldering iron, so he says. He did programming in assembly. I've never had a class in assembly." In fact, says Pierce, "none of the intro classes teach assembly language. At the junior level there's Computational Structures, which goes into the theory and math behind binary logic and arithmetic, graphing, shortest-route, and so on. Another class goes into the hardware side of this, with logic gates and some assembly programming."
It seems that while computer science once took a view that to learn the discipline you had to trace it to its beginnings on practically bare metal, today that's considered an advanced-level branch of study. You don't need it for the basics.
But what's it all for?
Beyond the nuts and bolts of what specifically you'd study and what machines you'd use to study it on, there's a bigger question looming over the field: why bother studying the subject at all?
My impression is that the level of practicality involved has waxed and waned over the years.
In the '80s, when Nancie K. was taking classes, for instance, she says that "the focus was definitely on number-crunching and databases on mainframes for major systems. I had vague notions that I'd be writing programs to track and/or analyze numbers, probably for a government contractor. As it happens, that's about what I was doing my first five years after graduation. And minus the government contractor, it's what I'm still doing."
This was an age when for some years, the field had been -- not stagnant, necessarily, but mature. Computers as data processors were well understood by businesses, and were very lucrative for both companies that sold them and organizations that used them. But a revolution was brewing. "Things actually had started to shift by '84 -- there were a few computer graphics classes in the catalog by my last semester, as well as some hardware-focused microcomputer/PC classes. The Last Starfighter came out around that time, so people were starting to see the creative things computers could do."
By the turn of the century, the classes Dr. Carlson took in high school and college had a very different feel. "Both programming classes I took, especially the intro to programming for engineers class in college, were very focused on just teaching the isolated mechanics of the language -- this is a function, this is an if statement, this is a loop, etc. -- without any real consideration to applications besides whatever contrived problems were on the homework. They weren't taught with the idea that you'd eventually want to try to put these pieces together into a bigger whole that would actually do something relevant or useful."
He sees this too as a reflection of the times -- there were kids who "enjoyed computers for the sake of tinkering with a computer," as the PC revolution had brought a wave of machines into homes but hadn't quite cooled off to the point that they had become dull appliances. "I feel like whoever was designing the curriculum was vaguely aware that people taking the class might want to do 'stuff with computers' in the future and didn't feel the need to try to tie it to any other discipline."
The class Dr. Carlson teaches now is called "programming for engineers," and is much more aimed at practical use. The language it's based on is MATLAB, "a numerical programming language that's fairly popular with academia and engineers." In his class, "the syllabus for the programming for engineers class is extremely focused on how to apply programming as a tool, starting with the typical problems -- things like 'how do I analyze this instrumentation data?', 'how do I simulate this physical process?', 'how do I automate this repetitive calculation?' -- and showing how the language fits into the solution, introducing the new concepts they'd need to solve it. Writing the code is just one step of the whole solution, since the students need to understand the physical basis of the code they're writing and correctly interpret the output it produces. The class is also set up to show students the limitations of computers, like floating-point accuracy or the concept of garbage-in, garbage-out, which is definitely relevant to them since they'll be using software for design even if they never write their own code again."
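The floating-point limitation Carlson mentions is easy to demonstrate. His class uses MATLAB, but the same behavior shows up in nearly every language; here's an illustrative sketch in Python (the specific numbers are just a classic textbook example, not drawn from his syllabus):

```python
import math

# Floating-point arithmetic is approximate: 0.1 and 0.2 have no exact
# binary representation, so their sum isn't exactly 0.3.
a = 0.1 + 0.2
print(a)           # 0.30000000000000004
print(a == 0.3)    # False

# The standard engineering fix: compare within a tolerance,
# not with exact equality.
print(math.isclose(a, 0.3))  # True
```

This is exactly the kind of lesson that matters even to students who never write code again after the class: the design software they'll use is subject to the same limits.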
In this way, the students of today aren't that different from the fifty-one people who attended the Summer School on Programme Design for Automatic Digital Computing Machines at Cambridge University in 1950. Those students didn't all necessarily want to become computer scientists -- it wasn't particularly clear that "computer science" was its own thing, yet. Many were chemists and mathematicians who were just excited about the practical ways new technology could make their existing jobs easier.
A true vocation
In fact, the practical needs of both students and employers have given rise to a whole category of computer science education under the aegis of schools that aren't colleges at all. These "code schools" are aimed at eschewing theory and giving students practical skills in a short amount of time. As Christopher Mims put it in the Wall Street Journal, "we've entered an age in which demanding that every programmer has a degree is like asking every bricklayer to have a background in architectural engineering."
And indeed, employers are looking for a focus on the practical as well. It's not that the skills colleges teach are obsolete -- but students seem to need extra help once they hit the industry. Facebook puts its new hires through an intensive programming course when they arrive, for instance.
Dave Parker, CEO of Code Fellows, a software programming trade school in Seattle, says that "there aren't a lot of IT jobs left managing hardware or server farms; some of those old degree programs need to recognize that and move on." And the software stacks that programmers use in day-to-day life change more quickly than college curricula can accommodate, says Parker.
While MOOCs like Udacity have made sweeping claims that they'll replace universities, Parker doesn't see CS degree programs going away anytime soon. But courses like the ones his company offers are a useful supplement. "Most employers still want CS grads with five years (real world) experience. We get to deliver 'life experience' candidates with fresh technical skills on a stack that is in high demand today. It doesn't replace a CS degree, but depending where you are in your career it's a great alternative."
Perhaps the best way of thinking about the future of CS education is that even if you've got a degree, you're going to keep needing an education.
This story, "Theory, practice, and fighting for terminal time: How computer science education has changed" was originally published by ITworld.