Computational thinking revolution

Over the past few years I’ve heard plenty about the importance of teaching computational thinking in schools. After 55 years of computational-thinking advocacy by computer scientists, educators have finally woken up to the great epistemic changes in science and in the doing of science.

See, as early as 1960 Alan Perlis started to talk about “algorithmizing,” and in his famous 1968 article “What to Do Till the Computer Scientist Comes” George Forsythe campaigned for seeing computing as a general-purpose thinking tool. In the mid-1970s Statz and Miller campaigned for “algorithmic thinking,” and in the 1980s Knuth discussed its importance, too. The same ideas were recast as “computational thinking” in the mid-1990s by Seymour Papert, and the latest strong champion of computational thinking is Jeannette Wing.

But what is important to understand about this history is that the computational thinking revolution won’t be started by computer scientists. Whatever we do won’t change how people in other fields see their own disciplines. That is not to say that there won’t be a revolution, because there will be. But the computational thinking revolution in schools will start from the sciences: biology, physics, mathematics. It will start when those fields have a generation of young teachers who are used to the new ways of doing science and to seeing their own fields through the computational lens.

Why Binary?

I gave a talk on computing’s development the other day, and ended up in a debate with a colleague about the importance of the binary numeral system in computing. Among other conceptual developments that underlie modern computing, I mentioned the binary numeral system and noted that today’s popular narrative about computers is that “it’s all about ones and zeros.” My colleague disagreed about its importance, and argued that the binary numeral system is not necessary but contingent: you can build computers and reason about computing using any reasonable numeral system. And indeed he is right, from both theoretical and engineering perspectives: ENIAC was a decimal computer, and the Soviets built ternary computers at one point. Knuth, for his part, considered the ternary numeral system “the prettiest” number system of all.

But the binary numeral system still holds a special place in digital computing, for a simple reason: you need at least two different symbols, but you never need more than two. From an engineering perspective, binary arithmetic is a good choice because, as Burks, one of the early computer pioneers, wrote, so many things are “naturally” binary. Binary arithmetic simplifies computer architecture.
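My colleague’s point, that any reasonable base works in principle, can be sketched in a few lines of Python (a toy illustration of positional numeral systems, not anything from the talk itself): the same quantity has an equally valid representation in base 2, base 3, or base 10.

```python
def to_base(n: int, base: int) -> str:
    """Represent a non-negative integer in an arbitrary base (2..10)."""
    if base < 2 or base > 10:
        raise ValueError("base must be between 2 and 10")
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, d = divmod(n, base)          # peel off the least significant digit
        digits.append(str(d))
    return "".join(reversed(digits))    # most significant digit first

# One quantity, three equally valid representations:
print(to_base(42, 2))   # binary:  101010
print(to_base(42, 3))   # ternary: 1120
print(to_base(42, 10))  # decimal: 42
```

The representation changes; the quantity, and anything you might compute with it, does not.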

Computer Science is Not Bad Physics

Methodological meta-studies became popular in computing in the 1990s. Their aim was to survey large samples of papers in order to establish facts about methodology in the computing disciplines. Literally dozens of well-done methodological reviews of computing were conducted, and the findings were unsurprising: computing is a methodologically eclectic field. Those meta-studies were useful for computing’s disciplinary discussions: they replaced anecdotal evidence with empirically justified data.

Alas, many of them made assumptions that some of us might be unhappy about. Many used categorization schemes from other fields. Some measured quality by quantity: the more lines about methodology, the better. Many used samples that were biased towards very specific publishing venues.

But the biggest problem, in my opinion, was that many of them had overt or covert normative intentions: they did not stop at describing what computing is, but went on to prescribe what it should be instead. (Hume’s guillotine, anyone?) Some compared research in computing with research in other fields, and argued that because some things were different between field X and computing, computing should be improved. And there’s the problem. Why should computing’s publishing profile look similar to that of optical engineering or physics? If you use the criteria of physics or anthropology to evaluate computing, what will you learn? You’ll learn that if computing were physics, it would be bad physics; and if computing were anthropology, it would be bad anthropology. But computing is a unique science with its own methods, approaches, and research agenda; hence, one should be careful when comparing research in computing with research in other independent disciplines.

What is more, it seems that many sciences are becoming more like computing rather than computing becoming more like them.

Computing’s Ontology (Part I)

One thing that puzzles me is the frequent comment that computing makes ontology so difficult. Some say it’s virtuality that makes ontology hard. Such comments permeate our field.

Someone posed questions about how money exists if it’s no longer “real” but virtual, in some cloud service or bank server. Someone else wrote that windows on the screen are at once virtual and physical (or something like that – I can’t remember anymore). Someone else – and this has been repeated over and over – was of the opinion that the ontology of programs is tricky because they are at the same time abstract (in the way mathematical objects are abstract) and concrete (they have causal powers). One computer scientist said that information (one of our subjects) is neither matter nor energy.

What’s so hard there? Consider the realist ontology: the one underlying scientific realism (the following version is from Searle, who’s a master of putting things clearly):

The world is made of particles in fields of force. Fundamental forces join some particles together to make increasingly complex combinations (molecules, minerals, rocks, mountains). Some of those combinations have organized into living systems, and some of those systems have evolved consciousness. Some forms of consciousness enable intentionality: the capacity to represent objects and states of affairs in the world to oneself.

Now consider virtual currency in this ontology. There’s nothing magical, nothing transcendental about it. Virtual currency exists in the form of magnetic charges on a hard drive platter somewhere. When you buy something from the store, a configuration of magnetic charges somewhere changes.

What about windows on the computer screen? There’s nothing vague about their existence: just dig deep enough into the mechanism. Patterns of electric charges in graphics memory are translated into signals that are sent to the monitor, which translates those signals into voltages across liquid crystal cells, creating a pattern on the grid of cells; each cell on the grid lets different colors from the backlight panel pass through to meet the eye. There’s the window on the screen, in thoroughly physical terms. There’s nothing uncertain about how, or through which mechanisms, windows on the screen exist. Such things may appear supernatural only to those who don’t know how the technology works.

There’s nothing mystical about the ontology of programs, either. A program in executable form exists as electric and magnetic charges ready for some causal action. A program in text form exists as black ink entangled in a grid of cellulose fibers. A program in my mind exists as a specific electrochemical and physical configuration in my brain (though I guess the jury is still out on the specifics of that). The realist ontology is simple and straightforward.

And the claim that information is neither matter nor energy? Well, I’m no physicist, and I first thought that everything has got to be either matter or energy, though I learned that I was wrong and the case isn’t that clear after all.

Nevertheless, virtual things exist in the very same way everything else exists: as patterns of particles in fields of force. There’s nothing ambiguous about their existence. To be sure, this ontology doesn’t bring questions to an end. On the contrary, this simple ontology gives rise to a ton of much more interesting questions: Through which mechanisms are ideas shared between individuals? What in the world, or in our brains, makes it possible for people to discover the same theoretical ideas independently? And then to agree about those ideas?

Questions like those aren’t easy. But it looks like answers to them can be discovered without postulating extra worlds that science hasn’t yet discovered.

Science of Microscopes

In their famous 1967 defence of computing as an independent discipline, Allen Newell, Alan Perlis, and Herb Simon argued that the computer is so different from other instruments, like the thermometer and microscope, that its study warrants a discipline of its own.

Given that they compared the computer with the microscope, it’s interesting to note that there once was an exciting new field called microscopical science. According to the field’s early journal, the Quarterly Journal of Microscopical Science, that science was about advancing technical knowledge of the microscope and research findings from using it. The journal stated in its inaugural issue in 1853 that improvements in technology had made the microscope readily available for research, that large numbers of researchers from various disciplines used it, and that there were academic societies devoted to it. On those grounds it defended the name “microscopical science” for that new, exciting, and unique field. Fast-forward a hundred years, replace “microscope” with “computer,” and the same arguments were tossed around: the text above could easily have come from the 1960s defences of computing’s disciplinary identity.

I don’t know about the disciplinary status of microscopical science, or microscopy, today. But I do know that the journal above was renamed the Journal of Cell Science in 1966. I wonder if the hundred-year delay will still apply, and if our field will, around 2060, eventually be re-branded “informatics,” “datalogy,” or “algorithmics.”

Computers, telescopes, and studying them

One of the famous computer science one-liners states that “computer science is no more about computers than astronomy is about telescopes.” It is most commonly attributed to Edsger Dijkstra, but I’ve struggled to find where exactly Dijkstra said that. It seems that no one who attributes the quote to Dijkstra includes its source, and a keyword search for “telescope” and “astronomy” can’t find the quote in the E. W. Dijkstra Archive. The ACM Digital Library’s keyword search doesn’t work very well for old scanned documents, but it finds no occurrences of “telescopes” in that context before 2000, when, in a Ubiquity article, William Wulf wrote: “We don’t have a science of telescopes. We have a science of astronomy! Well, now we have a science of computers, and it’s well accepted.”

Digging deeper, I found Michael Fellows using the actual quote “CS is no more about computers than astronomy is about telescopes” in a 1991 paper, “Computer SCIENCE and Mathematics in the Elementary Schools.” I also found a story of the quote’s origins, which traced Dijkstra’s own use of the phrase to a 2001 video for Dutch TV.

The same idea was expressed as early as 1974 by J. Hebenstreit, in the form: “while the processing of information is a science it is not one which can be apprehended by merely studying the basic tools, i.e. the computer and programming languages, any more than astronomy can be reduced to the detailed study and operation of telescopes.”

Yet what’s important about the quote is how broadly it was adopted, and what sort of image it gives of computing as a discipline and of the people in the field.

Testing can’t even show the presence of bugs

At the 1968 NATO conference on software engineering, Dijkstra criticized testing as a “very inefficient way of convincing oneself of the correctness of programs.” At the follow-up conference the next year, his objection had turned into the famous judgment that “testing shows the presence, not the absence of bugs” (SoC, p. 123).

Dijkstra’s statement was analogous to falsificationism in the philosophy of science: an experiment can disprove a theory, but never prove it to be correct.

But although Dijkstra was strong in his conviction, he did not take his argument far enough. In the philosophy of science, the Duhem–Quine thesis points out a weakness in falsificationism: when an experiment yields anomalous results, one can never be sure what caused them. It is not necessarily the theory that is wrong: the anomalies could have resulted from anything in the test situation, including the theory itself, the apparatus, the theory of how the apparatus should work, or any of the auxiliary hypotheses and theories.

Similarly, when program testing yields erroneous results, those results could stem from a bug in the program, a compiler bug, a hardware bug, an operating system bug, a problem in the whole system’s interoperability, a cosmic ray interfering with the circuitry, and so forth. That is, running a perfectly correct program can give erroneous results regardless of the program’s correctness. Likewise, a compiler bug may cancel out a program bug, letting a buggy program pass the test. In other words, testing can’t even show the presence of bugs in a single program: testing can only show that “something” went wrong.
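The canceling-bugs scenario can be made concrete with a deliberately broken toy example in Python (hypothetical code, purely illustrative): a buggy function passes its test because the test’s expected value was derived with the same mistaken reasoning, so the passing test localizes nothing.

```python
def area_of_square(side):
    # Bug in the program: adds instead of multiplying.
    return side + side

def test_area_of_square():
    # Bug in the test oracle: the expected value was computed
    # with the same mistaken formula, so the two bugs cancel.
    expected = 3 + 3  # should be 3 * 3 == 9
    assert area_of_square(3) == expected  # passes, yet both sides are wrong

test_area_of_square()  # no assertion error is raised
print("program still wrong:", area_of_square(3) != 9)
```

A green test here tells us only that program and oracle agree; whether the agreement is between two correct artifacts or two faulty ones is exactly what the test cannot say.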

What is Computer Science?

What would be a better way of starting a new blog on computing than tackling one of computing’s perennial questions: “What is computer science?” I thought I’d give a short answer in video format.

In this blog, I’ll continue to discuss this and related topics in more detail. (The references that I mention in the slides are from the book The Science of Computing: Shaping a Discipline.)