Philosophy of computing
Philosophy of computing is the philosophy of computer science, computer
engineering, computer use, and related fields.
I will expand this as necessary into the following parts:
Logic
Metaphysics
Epistemology
Ethics / political philosophy
Aesthetics
(Yes, I mean to suggest that virtually every traditional branch of philosophy
is included!) Please also read my papers on various aspects of these issues,
as what follows is only a very unfinished sketch of these topics.
Logic
The mathematical theory of computability arose out of a question in logic, and
the first mathematical model of a computer was devised to answer that question.
This part of the history of computing is well known and well appreciated. What
is sometimes forgotten is that Turing justified his model of a computer by
appeal to the activity of an idealized human clerk, not any sort of machine at
all. This should not be terribly surprising, as he wrote his famous paper in
1936, before there were any computing machines in the modern sense at all. (I
will discuss later on what a computing machine might be.)
The question in logic which Turing answered was the so-called "decision
problem" for first-order logic:
Is there a uniform procedure for deciding the validity of all sentences of
first-order logic?
The answer (assuming that Turing's analysis of procedures and methods for
solving such a question is correct) is no. His result is thus a triumph of
mathematical logic. A whole family of related fields grew up around this work
of his (not neglecting his teacher Church, as well as Post, Gödel, Kleene, and
many others). Today computability theory is an important sister discipline to
logic, studied not only for its own sake but for its applications in the
computing fields. A philosopher of computing should study all of these
technical resources carefully. (As I have begun to do.)
But there are other parts of logic that are important for the philosophy of
computing as well. For example, non-classical logics are used in computer
science not as aids to formalizing deduction proper, but as important
branches of mathematics in their own right. Philosophers (e.g. C. I. Lewis)
and mathematicians (e.g. Post) invented such systems for purposes internal to
logic; today computer scientists use modal logics to discuss properties of
compilers, and multivalued logics in the theory of databases. No strange
heterogeneity arises, as one might think it would. These logics are instead
"logics by analogy" - they are boolean algebras (or generalizations of the
same) put to specific purposes.
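To make the database example concrete, here is a minimal sketch in C (my own
illustration; the names and encoding are mine) of a three-valued logic of the
Kleene/SQL kind, in which an "unknown" value propagates through conjunction and
disjunction much as NULL does in database queries:

#include <stdio.h>

/* A sketch of Kleene's three-valued logic: false < unknown < true,
   with conjunction as minimum, disjunction as maximum, and negation
   as reflection about the middle value. */

typedef enum { B_FALSE = 0, B_UNKNOWN = 1, B_TRUE = 2 } Bool3;

static Bool3 and3(Bool3 a, Bool3 b) { return a < b ? a : b; }   /* min */
static Bool3 or3 (Bool3 a, Bool3 b) { return a > b ? a : b; }   /* max */
static Bool3 not3(Bool3 a)          { return (Bool3)(2 - a); }

static const char *show(Bool3 a)
{
    return a == B_TRUE ? "true" : a == B_FALSE ? "false" : "unknown";
}

int main(void)
{
    printf("%s\n", show(and3(B_UNKNOWN, B_FALSE)));   /* false, as with NULL AND FALSE */
    printf("%s\n", show(or3 (B_UNKNOWN, B_TRUE)));    /* true, as with NULL OR TRUE    */
    printf("%s\n", show(not3(B_UNKNOWN)));            /* unknown stays unknown         */
    return 0;
}

The "logic" here is just a small ordered algebra put to an engineering purpose,
which is exactly the "logic by analogy" reading suggested above.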
Another use of logic in computing is making sure we have the right notion of
a function. In mainstream mathematics functions (and indeed most objects) are
taken extensionally. Thus f(x) = x * x + 2 is regarded as the same function
as g(y) = y^2 + 2. But in computer science this may not be a good idea, at
least not in all contexts:
Consider the following two Scheme programs to compute these functions:
(define f (lambda (x) (+ (* x x) 2)))
and
(define g (lambda (x) (+ (expt x 2) 2)))
These (of course) return the same values given the same inputs. But g and f
have a computationally important difference: g calls a general exponentiation
procedure where f needs only a single primitive multiplication. The
computational overhead for g is thus higher than it is for f, at least
assuming a lack of optimization. So in a certain sense the effects of each
function taken as a process are quite different. It is this metaphysical
difference (of which more below) that has led computer scientists to take an
intensional view of functions. In fact, it is not surprising that the Scheme
language (like all those in the LISP family) uses a syntax that involves the
lambda calculus.
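The point stands out even more sharply when the two processes differ by more
than a single call. The following sketch, in C rather than Scheme and entirely
my own illustration, defines two functions that agree on every input (the sum
0 + 1 + ... + n) but compute it in very different ways:

#include <stdio.h>

/* Extensionally one function, intensionally two processes. */

static unsigned long sum_loop(unsigned long n)
{
    unsigned long s = 0, i;
    for (i = 1; i <= n; i++)     /* n additions */
        s += i;
    return s;
}

static unsigned long sum_formula(unsigned long n)
{
    return n * (n + 1) / 2;      /* Gauss's formula: one multiply, one divide */
}

int main(void)
{
    printf("%lu %lu\n", sum_loop(1000), sum_formula(1000));   /* prints 500500 500500 */
    return 0;
}

Regarded extensionally there is one function here; regarded as processes there
are clearly two, and for computing it is often the second view that matters.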
Metaphysics
It isn't too surprising, given the above, that the metaphysics of computing
centers on the Turing machine. A Turing machine represents a computer as a
finite-state machine with unbounded memory. This is an interesting choice,
given that there are prima facie reasons to suppose that the universe is
continuous. On the other hand, no computer we have ever made really has
unbounded memory. So the metaphysics of computing involves debating questions
such as these. But these questions are just the tip of a very large iceberg.
Other questions of interest are: What is the correct characterization of an
event, as usable in the theory of automata? What makes something programmable?
Are there computers which aren't programmable, or is putting the slide rule
and the G3 on my desk in the same category somehow misleading? What about us?
We compute; in fact, Turing's original "computers" were people, for at that
time a computer was someone employed to perform calculations.
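To fix ideas, here is a minimal sketch in C (my own, purely illustrative) of
the "finite control, unbounded memory" picture: the transition table is a
fixed, finite object, while the tape is extended on demand and is bounded in
practice only by the host machine.

#include <stdio.h>
#include <stdlib.h>

enum { A, B, HALT };                         /* a finite set of control states */

typedef struct { int write; int move; int next; } Rule;

/* The whole "program" is this finite table: in state A on a blank (0)
   square, write 1, move right, go to B; in state B on a blank square,
   write 1, move right, and halt. */
static const Rule delta[2][2] = {
    /* read 0          read 1 */
    { {1, +1, B},      {1, +1, HALT} },      /* state A */
    { {1, +1, HALT},   {0, +1, HALT} }       /* state B */
};

int main(void)
{
    size_t cap = 16, head = 0;
    int *tape = calloc(cap, sizeof *tape);   /* the tape starts small ...      */
    int state = A;

    if (!tape) return 1;
    while (state != HALT) {
        Rule r = delta[state][tape[head]];
        tape[head] = r.write;
        head += r.move;
        state = r.next;
        if (head >= cap) {                   /* ... and is grown on demand, so */
            int *bigger = calloc(cap * 2, sizeof *bigger);
            if (!bigger) return 1;           /* memory is "unbounded" only in  */
            for (size_t i = 0; i < cap; i++) /* the idealized sense            */
                bigger[i] = tape[i];
            free(tape);
            tape = bigger;
            cap *= 2;
        }
    }
    printf("tape: %d %d %d\n", tape[0], tape[1], tape[2]);   /* prints: tape: 1 1 0 */
    free(tape);
    return 0;
}

With a fixed, bounded tape the machine has only finitely many configurations
and so is, in effect, a finite automaton - a contrast taken up again under
Epistemology below.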
The metaphysical questions in computing flow the other way, as well. For
example, many recent debates in the philosophy of mind center around the
question of the applicability of computational concepts to understanding our
mental functioning. Another related debate is whether the construction of a
computational device that has mental qualities is possible. These two are
often conflated under the thesis of artificial intelligence, when strictly
speaking they are independent. For example, it could be the case that
although human intelligence is not computational in character, some other
kind of intelligence might be. Or, conversely, it might be that although we
are computational, it is not possible to construct an AI out of conventional
computer hardware and software at all.
Another series of metaphysical questions includes some mereological ones. Does
software come in parts? Can we speak of parts functionally: for example, the
part of Dreamweaver that handles the scrollbar to the right of this window?
How do those parts relate to portions of source code? At the time of writing,
there are debates over whether IBM illegally put SCO-owned code into Linux.
Can these issues be cashed out in mereological terms?
For example:
#include <stdio.h>
int main (void)
{
    printf ("Hello, world\n");
    return 0;
}
So, what parts of the code are relevant there? Do I count the header in my
code and the library it supports, or just the header? Or neither at all?
Should the code have some sort of atomic parts? How are those individuated?
Consider:
while ((c = getchar ()) != EOF) { ... }
vs.
c = getchar ();
while (c != EOF) { ... ; c = getchar (); }
Are those the same code from a mereological perspective? What about
post-increment vs. pre-increment? Should it be the assembly language that
determines similarity? If so, we eliminate the above example but create
others; after all, reorganization of instruction flow occurs in optimizing
compilers and in the CPUs themselves ...
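As an illustration of the post- versus pre-increment case (my own example,
with arbitrary names): the two functions below differ only in that one token,
and since the value of the increment expression is discarded, they behave
identically; an optimizing compiler will typically emit the same machine code
for both.

#include <stdio.h>

/* Two loops differing only in post- vs. pre-increment of i.  The value
   of the increment expression is never used, so the difference cannot
   show up in behaviour. */

static int sum_post(const int *a, int n)
{
    int s = 0, i;
    for (i = 0; i < n; i++)      /* post-increment */
        s += a[i];
    return s;
}

static int sum_pre(const int *a, int n)
{
    int s = 0, i;
    for (i = 0; i < n; ++i)      /* pre-increment */
        s += a[i];
    return s;
}

int main(void)
{
    int a[] = {1, 2, 3, 4};
    printf("%d %d\n", sum_post(a, 4), sum_pre(a, 4));   /* prints 10 10 */
    return 0;
}

If sameness is settled at the level of the emitted instructions, these are one
piece of code; at the level of source tokens, they are two.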
Epistemology
As one might suppose there are two aspects to the epistemology of computing.
Computational ideas have entered as a constraint on understanding how we
know. Learning schemes that are either uncomputable or require algorithms
that are too complex are regarded as implausible. Computational properties
are attributed to sensory systems. And people debate the merits of these
proposals. Some go so far as to suggest that whole computational fields can
rework how we think about truth, induction, and much else, and not merely
focus our inquiry. Kevin Kelly advocates a form of computational learning
theory to address traditional epistemological questions. Paul and Patricia
Churchland advocate an interesting form of the thesis that the brain is a
biological computer - not a von Neumann machine (a metaphysical issue!) - with
important epistemological twists. Paul is not a Rortyian subjectivist, but he
suggests that computational (and neuroscientific - one of my other interests)
ideas lead us to the conclusion that truth, while still epistemically
important, is in some sense a derivative, not a basic notion. Yet the
underlying notions are epistemological in character, unlike the satisfaction
relation introduced in model theory by Tarski. Curiously enough, the insights
here can be arrived at by other routes. Mario Bunge, while not a
computationalist (except in the "anti" sense!), also emphasizes the literally
partial nature of truth in a remarkably similar way.
Enough of a taste of one side of the new "computational turn" in
epistemology. The other side concerns how we come to know about computational
artifacts, computational processes, and the like. For example:
These often shade into ethical questions: given that the number of
computationally useful states accessible to even the smallest electronic
computer is far larger than we could possibly test exhaustively (a machine
with just one kilobyte of memory already has 2^8192 distinct states), how do
we go about investigating the reliability of such a device? This is a
wonderful question in the philosophy of engineering, and it illustrates how
ethical and epistemological questions are never far off even in (say)
computer architecture.
How can we make programs that are provably correct with respect to their
specification and yet still useful?
Are there reliable (if not infallible) means of detecting dead code? (Here
again a metaphysical presupposition enters: if you believe something like the
"Church-Turing Thesis", then you are logically committed to believing that the
problem of detecting dead code is, in general, nomologically impossible! A
small sketch of why appears after these questions.)
Given that the Turing machine is a model of a machine with an unbounded
amount of memory, and given that any device we build is a circumscribed,
bounded, machine, why do we use this model (rather than a finite automaton)
to understand our electronic computers? (I for one suspect the notion of
programmability, a very messy and metaphysically interesting notion, is
central. But I have only begun to investigate this.)
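Here is the promised sketch of the dead code point (my own construction; the
names are arbitrary). Whether the final printf below is dead is just the
question of whether the call above it returns; for this particular function
and input it does, but a perfectly general dead-code detector would have to
answer that question for any program put in mystery's place, and that is the
halting problem.

#include <stdio.h>

/* The printf in main is dead code exactly when mystery() fails to
   return.  Deciding that, for an arbitrary mystery(), is deciding the
   halting problem. */

/* Stand-in for an arbitrary program under analysis.  This one iterates
   the Collatz map; it happens to halt for 27, but nothing about its
   text makes that obvious to an analyser. */
static void mystery(unsigned long n)
{
    while (n != 1)
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
}

int main(void)
{
    mystery(27);
    printf("Dead code, or not?\n");   /* reachable iff mystery(27) returns */
    return 0;
}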
Ethics
Philosophy of computing probably connotes ethical issues to a lot of people.
The field of computing ethics is well developed; as usual, however, this is
only one part of the picture. An integrated philosophy of computing reflects
not only on matters such as privacy, access to technology, intellectual
property, and other ethical topics on which computing might focus concern,
but also on two further topics. One of these is using computing to
better understand ethics. While this is largely limited to teaching ethics
and developing infrastructure for political philosophy, there is room for
descriptive ethics mediated by computer investigation, as well as
computational modeling of ethical situations, agents, etc. (This is often
done through game theory; I am skeptical of the methods here, but not the
goals.)
There is also the interesting question, raised by my former colleagues Kari
Coleman, Peter Danielson, and others, of the degree to which one can
attribute moral responsibility to computers, robots, and other computational
artifacts, as well as the more usual question of building moral functioning
into such artifacts. The two are halves of one coin.
Aesthetics
Aesthetics is the study of the beautiful, the sublime, and various related
terms and their duals. It too belongs in a philosophy of computing. As a
long-time user of Apple's computers and operating systems, I have also been a
long-time advocate of the importance of an aesthetic dimension to technology
and artifacts. The issues here are subtle: beyond simply making life pleasant,
do aesthetic values play a role in computing? The answer is yes, though
explaining why is difficult. I have written a paper about this topic.
Are there other aesthetic questions in computing? Yes:
What does a programmer mean when they suggest that a piece of code is neat,
beautiful, etc.?
Is elegance the same as the above?
Should we be willing to pay a premium to have a more pleasant experience
using artifacts, or is a minimum standard of experience required?
Do the controversies over interaction methods with computers and their
interfaces reflect subjective preferences or underlying cognitive differences?