Entries Tagged 'Philosophy of Computing'

Why Binary?

I gave a talk on computing's development the other day, and ended up in a debate with a colleague about the importance of the binary numeral system in computing. Among other conceptual developments that underlie modern computing, I mentioned the binary numeral system and noted that today's popular narrative about computers is that "it's all about ones and zeros". My colleague disagreed about its importance, and argued that the binary numeral system is not necessary but contingent: you can build computers and reason about computing using any reasonable numeral system. And indeed he is right, from both a theoretical and an engineering perspective: ENIAC was a decimal computer, and the Russians built ternary computers at some point. And Knuth considered the ternary numeral system "the prettiest" number system of all.

But the binary numeral system still holds a special place in digital computing for a simple reason: you need at least two different symbols, but you don't need more than two. From an engineering perspective, binary arithmetic is a good choice because, as Burks, one of the early computer pioneers, wrote, so many things are "naturally" binary. Binary arithmetic simplifies computer architecture.
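To make the contingency concrete, here is a minimal Python sketch (my own illustration, not something from the talk): the same integer can be written down in any base of two or more, so binary is simply the smallest numeral system that works, not the only one.

    def to_base(n, base):
        """Represent a non-negative integer as a list of digits in the given base,
        most significant digit first."""
        if base < 2:
            raise ValueError("a positional numeral system needs at least two symbols")
        digits = []
        while True:
            n, remainder = divmod(n, base)
            digits.append(remainder)
            if n == 0:
                break
        return digits[::-1]

    # The same number in three numeral systems: the choice of base is an
    # engineering decision, not a mathematical necessity.
    print(to_base(42, 2))   # [1, 0, 1, 0, 1, 0]  binary
    print(to_base(42, 3))   # [1, 1, 2, 0]        ternary
    print(to_base(42, 10))  # [4, 2]              decimal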

Computer Science is Not Bad Physics

Methodological meta-studies became popular in computing in the 1990s. Their aim was to survey large samples of papers in computing in order to find out facts about methodology in computing disciplines. Literally dozens of well-done methodological reviews of computing were conducted, and the findings were unsurprising: computing is a methodologically eclectic field. Those meta-studies were useful for computing's disciplinary discussions: they replaced anecdotal evidence with empirically justified data.

Alas, many of them made assumptions that some of us might be unhappy about. Many used categorization schemes from other fields. Some measured quality by quantity: the more lines devoted to methodology, the better. Many used samples that were biased towards very specific publishing venues.

But the biggest problem, in my opinion, was that many of them had overt or covert normative intentions: they did not stop at describing what computing is, but went on to prescribe what it should be instead. (Hume's guillotine, anyone?) Some compared research in computing with research in other fields, and argued that because some things were different between field X and computing, computing should be improved. And there's the problem. Why should computing's publishing profile look similar to that of optical engineering or physics? If you use the criteria of physics or anthropology to evaluate computing, what will you learn? You'll learn that if computing were physics, it would be bad physics; and if computing were anthropology, it would be bad anthropology. But computing is a unique science with its own methods, approaches, and research agenda — and hence, one should be careful about comparing research in computing with research in other independent disciplines.

What is more, it seems that many sciences are becoming more like computing rather than computing becoming more like them.

Computing’s Ontology (Part I)

One thing that puzzles me is how often people comment on how computing makes ontology so difficult. Some say it's virtuality that makes ontology hard. Such comments permeate our field.

Someone posed questions about how money exists if it's no longer "real" but virtual, sitting in some cloud service or bank server. Someone else wrote that windows on the screen are at once virtual and physical (or something like that – I can't remember anymore). Someone else – well, this has been repeated over and over – was of the opinion that the ontology of programs is tricky because programs are at the same time abstract (in the way mathematical objects are abstract) and concrete (they have causal power). One computer scientist said that information (one of our subjects) is neither matter nor energy.

What’s so hard there? Consider the realist ontology: the one underlying scientific realism (the following version is from Searle, who’s a master of putting things clearly):

The world is made of particles in fields of force. Fundamental forces join some particles together to make increasingly complex combinations (molecules, minerals, rocks, mountains). Some of those combinations have organized into living systems, and some of those systems have evolved consciousness. Some forms of consciousness enable intentionality: the capacity to represent objects and states of affairs in the world to oneself.

Now consider virtual currency in this ontology. There’s nothing magical, nothing transcendental about it. Virtual currency exists in the form of magnetic charges on a hard drive platter somewhere. When you buy something from the store, a configuration of magnetic charges somewhere changes.

What about windows on the computer screen? There's nothing vague about their existence: just dig deep enough into the mechanism. Patterns (electric charges) in graphics memory are translated into signals (streams of electrons) that are sent to the monitor, which translates those signals into voltages in liquid crystal cells, creating a pattern on the grid of liquid crystal cells; each cell on the grid lets different colors from the backlight panel pass through and meet the eye. There's the window on the screen, in very much physical terms. There's nothing uncertain about how, or through which mechanisms, windows on the screen exist. Only to those who don't know how the technology works can such things appear supernatural.
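If it helps to see the point in code, here is a toy sketch of the idea (my own simplification, not how any real graphics stack is implemented): a "window" is nothing but a rectangular pattern of values in a pixel array.

    # A toy framebuffer: a grid of RGB values. A real framebuffer lives in
    # graphics memory as electric charges; this models the same kind of pattern
    # as a Python list of lists.
    WIDTH, HEIGHT = 80, 60
    framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def draw_window(fb, x, y, w, h, color=(200, 200, 255)):
        """'Creating a window' amounts to writing a rectangular pattern into the buffer."""
        for row in range(y, y + h):
            for col in range(x, x + w):
                fb[row][col] = color

    draw_window(framebuffer, 10, 10, 40, 20)
    # Display hardware would scan this pattern out as voltages on liquid crystal
    # cells; ontologically, the window is already fully there in the pattern.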

There’s nothing mystical about the ontology of programs, either. A program in an executable form exists as electric and magnetic charges ready for some causal action. A program in a text form exists as black ink entangled in a grid of cellulose fibers. A program in my mind exists as a specific electrochemical and physical configuration in my brain (I guess that the jury is still out on the specifics of that, though). The realist ontology is simple and straightforward.

And the claim that information is neither matter nor energy? Well, I'm no physicist, but I first thought that everything's gotta be matter or energy, though I learned that I was wrong and the case isn't that clear after all.

Nevertheless, virtual things exist in the very same way everything else exists: as patterns of particles in fields of force. There's nothing ambiguous about their existence. But this ontology surely doesn't bring questions to an end. On the contrary, this simple ontology gives rise to a ton of much more interesting questions: Through which mechanisms are ideas shared between individuals? What in the world, or in our brains, makes it possible for people to discover the same theoretical ideas independently? And then agree about those ideas?

Questions like those aren’t easy. But it looks like answers to them can be discovered without postulating extra worlds that science hasn’t yet discovered.

Testing can’t even show the presence of bugs

At the 1968 NATO Conference on Software Engineering, Dijkstra criticized testing as a "very inefficient way of convincing oneself of the correctness of programs". At the next year's follow-up conference, his objection turned into the famous judgment that "testing shows the presence, not the absence of bugs" (SoC, p. 123).

Dijkstra’s statement was analogous to falsificationism in the philosophy of science: an experiment can disprove a theory, but never prove it to be correct.

But although Dijkstra was strong in his conviction, he did not take his argument far enough. In the philosophy of science, the Duhem-Quine thesis points out a weakness in falsificationism: when an experiment yields anomalous results, one can never be sure what caused them. It is not necessarily the theory that is wrong: the anomalies could have resulted from anything in the test situation, including the theory, the apparatus, the theory of how the apparatus should work, or any of the auxiliary hypotheses and theories.

Similarly, when program testing yields erroneous results, those results could stem from a bug in the program, a compiler bug, a hardware bug, an operating system bug, a problem with the whole system's interoperability, a cosmic ray interfering with the circuitry, and so forth. That is, running a perfectly correct program can give erroneous results regardless of the program's correctness. Similarly, a compiler bug may cancel out a program bug, letting a buggy program pass the test. In other words, testing can't even show the presence of bugs in a single program. Testing can only show that "something" went wrong.
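As a small illustration of my own (not Dijkstra's), even a trivial unit test leaves the question open: when the assertion below fails, the fault is arguably in the test's own expectation (an exact floating-point comparison) rather than in the code under test, yet all the red result says is that something, somewhere in the whole arrangement, went wrong.

    def mean(values):
        """The code under test: arguably correct."""
        return sum(values) / len(values)

    def test_mean():
        # This assertion fails on typical IEEE-754 systems, because 0.1 + 0.2 + 0.3
        # is not exactly 0.6 in binary floating point. The failure does not localize
        # the fault: it could be blamed on the function, on the test's expectation,
        # or on the floating-point environment.
        assert mean([0.1, 0.2, 0.3]) == 0.2

    if __name__ == "__main__":
        try:
            test_mean()
            print("test passed")
        except AssertionError:
            print("test failed, but where is the bug?")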

What is Computer Science?

What would be a better way of starting a new blog on computing than tackling one of computing's perennial questions: "What is computer science"? I thought I'd give a short answer in a video format.

In this blog, I'll continue to discuss this topic and related topics in more detail. (The references that I mention in the slides are from the book The Science of Computing: Shaping a Discipline.)