Source:
http://cogsci.uwaterloo.ca/Articles/Pages/comp.phil.sci.html
Summary:
The author argues that the computer can serve as a method of inquiry in the
philosophy of science. The approach consists mainly in building computational
models in a programming language and using the behavior of those models to
address questions in the philosophy of science. The author describes three
approaches -- cognitive modeling, artificial intelligence, and the theory of
computation -- in each of which computational methods, that is, the building
of computational models, can be brought to bear on philosophical problems.
Author:
Paul Thagard
Philosophy Department
University of Waterloo
Waterloo, Ontario, N2L 3G1
pthagard@watarts.uwaterloo.ca
Full text:
What do philosophers do? Twenty years ago, one might have heard such answers
to this question as "analyze concepts" or "evaluate arguments". The answer
"write computer programs" would have inspired a blank stare, and even a
decade ago I wrote that computational philosophy of science might sound like
the most self-contradictory enterprise in philosophy since business ethics
(Thagard 1988). But computer use has since become much more common in
philosophy, and computational modeling can be seen as a useful addition to
philosophical method, not as the abandonment of it. I will try in this paper
to summarize how computational models are making substantial contributions to
the philosophy of science.
If philosophy consisted primarily of conceptual analysis, or mental
self-examination, or generation of a priori truths, then computer modeling
would indeed be alien to the enterprise. But I prefer a different picture of
philosophy, as primarily concerned with producing and evaluating theories,
for example theories of knowledge (epistemology), reality (metaphysics), and
right and wrong (ethics). The primary function of a theory of knowledge is to
explain how knowledge grows, which requires both describing the structure of
knowledge and the inferential procedures by which knowledge can be increased.
Although epistemologists often focus on mundane knowledge, the most
impressive knowledge gained by human beings comes through the operation of
science: experimentation, systematic observation, and theorizing concerning
the experimental and observational results. Hence at the core of epistemology
is the need to understand the structure and growth of scientific knowledge, a
project for which computational models can be very useful.
In attempting to understand the structure and development of scientific
knowledge, philosophers of science have traditionally employed a number of
methods such as logical analysis and historical case studies. Computational
modeling provides an additional method that has already advanced
understanding of such traditional problems in the philosophy of science as
theory evaluation and scientific discovery. This paper will review the
progress made on such issues by three distinct computational approaches:
cognitive modeling, engineering artificial intelligence, and theory of
computation.
The aim of cognitive modeling is to simulate aspects of human thinking; for
philosophy of science, this becomes the aim to simulate the thinking that
scientists use in the construction and evaluation of hypotheses. Much
artificial intelligence research, however, is not concerned with modeling
human thinking, but with constructing algorithms that perform well on
difficult tasks independently of whether the algorithms correspond to human
thinking. Similarly, the engineering AI approach to philosophy of science
seeks to develop computational models of discovery and evaluation
independently of questions of human psychology. Computational philosophy of
science has thus developed two streams that reflect the two streams in
artificial intelligence research, one concerned with modeling human
performance and the other with machine intelligence. A third stream of
research uses abstract mathematical analysis and applies the theory of
computation to problems in the philosophy of science.
1. Cognitive Modeling
Cognitive science is the interdisciplinary study of mind, embracing
philosophy, psychology, artificial intelligence, neuroscience, linguistics,
and anthropology. From its modern origins in the 1950s, cognitive science has
primarily worked with the computational-representational understanding of
mind: we can understand human thinking by postulating mental representations
akin to computational data structures and mental procedures akin to
algorithms (Thagard 1996). The cognitive-modeling stream of computational
philosophy of science views topics such as discovery and evaluation as open
to investigation using the same techniques employed in cognitive science. To
understand how scientists discover and evaluate hypotheses, we can develop
computer models that employ data structures and algorithms intended to be
analogous to human mental representations and procedures. The cognitive
modeling stream of computational philosophy of science can be viewed as part
of naturalistic epistemology, which sees the study of knowledge as closely
tied to human psychology, not as an abstract logical exercise.
Discovery
In the 1960s and 1970s, philosophers of science discussed whether there is a
"logic of discovery" and whether discovery (as opposed to evaluation) is a
legitimate topic of philosophical (as opposed to psychological)
investigation. In the 1980s, these debates were superseded by computational
research on discovery that showed how actual cases of scientific discovery
can be modeled algorithmically. Although the models that have been produced
to date clearly fall well short of simulating all the thought processes of
creative scientists, they provide substantial insights into how scientific
thinking can be viewed computationally.
Because of the enormous number of possible solutions to any scientific
problem, the algorithms used in scientific discovery cannot guarantee that
optimal discoveries will be made from the input provided. Instead,
computer models of discovery employ heuristics, approximate methods for
attempting to cut through data complexity and find patterns. The pioneering
step in this direction was the BACON project of Pat Langley, Herbert Simon
and their colleagues (Langley et al. 1987). BACON is a program that uses
heuristics to discover mathematical laws from quantitative data, for example
discovering Kepler's third law of planetary motion. Although BACON has been
criticized for assuming an over-simple account of human thinking, Qin and
Simon (1990) found that human subjects could generate laws from numerical
data in ways quite similar to BACON.
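To convey the flavor of such heuristics, here is a minimal sketch in the spirit of BACON (an illustration only, not Langley and Simon's actual program): it repeatedly forms ratios and products of existing terms and stops when some term is nearly invariant across the observations. The planetary figures are rounded textbook values used purely for illustration.

# A minimal BACON-style sketch (illustrative; not the original program).
def nearly_constant(vals, tol=0.05):
    mean = sum(vals) / len(vals)
    return (max(vals) - min(vals)) / abs(mean) < tol

def find_invariant(columns, max_depth=3):
    """columns: dict of term name -> list of observed values."""
    terms = dict(columns)
    seen = {tuple(round(v, 6) for v in vals) for vals in terms.values()}
    for _ in range(max_depth):
        for name, vals in terms.items():
            if nearly_constant(vals):
                return name
        new_terms = {}
        names = list(terms)
        for i, x in enumerate(names):
            for y in names[i + 1:]:
                ratio = [a / b for a, b in zip(terms[x], terms[y])]
                product = [a * b for a, b in zip(terms[x], terms[y])]
                for label, combo in ((f"({x}/{y})", ratio), (f"({x}*{y})", product)):
                    key = tuple(round(v, 6) for v in combo)
                    if key not in seen:          # skip terms already constructed
                        seen.add(key)
                        new_terms[label] = combo
        terms.update(new_terms)
    for name, vals in terms.items():             # final check after the last round
        if nearly_constant(vals):
            return name
    return None

data = {"D": [0.39, 1.00, 5.20],    # orbital distance in AU (Mercury, Earth, Jupiter)
        "P": [0.24, 1.00, 11.86]}   # orbital period in years
print(find_invariant(data))
# prints a product/ratio term equal to D^3/P^2 (or its reciprocal),
# i.e. a form of Kepler's third law.

The point of the sketch is only that simple ratio-and-product heuristics, applied blindly to quantitative data, can recover a law-like invariant without any prior physical theory.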
Scientific discovery produces qualitative as well as quantitative laws.
Kulkarni and Simon (1988) produced a computational model of Krebs' discovery
of the urea cycle. Their program, KEKADA, reacts to anomalies, formulates
explanations, and carries out simulated experiments in much the way described
in Hans Krebs's laboratory notebooks.
Not all scientific discoveries are as data-driven as the ones so far
discussed. They often involve the generation of new concepts and hypotheses
that are intended to refer to non-observable entities. Thagard (1988)
developed computational models of conceptual combination, in which new
theoretical concepts such as sound wave are generated, and of abduction, in
which new hypotheses are generated to explain puzzling phenomena. Darden
(1990, this volume) has investigated computationally how theories that have
empirical problems can be repaired.
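A toy sketch can illustrate the bare logical form of abduction (this is only an illustration, not the model actually presented in Thagard 1988): given rules stating which hypotheses would explain which phenomena, a puzzling observation triggers retrieval of the hypotheses that could explain it. The rules and facts below are hypothetical.

# A toy sketch of abductive hypothesis generation (illustrative only).
rules = {
    # hypothesis                 : phenomena it would, if true, explain
    "sound is a wave"            : ["sound echoes off walls", "sounds interfere"],
    "sound is a stream of particles": ["sound echoes off walls"],
}

def abduce(puzzling_observation, rules):
    """Return the candidate hypotheses that would explain the observation."""
    return [h for h, explains in rules.items() if puzzling_observation in explains]

print(abduce("sounds interfere", rules))
# -> ['sound is a wave']   (the hypothesis that would explain the puzzling phenomenon)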
One of the most important cognitive mechanisms for discovery is analogy,
since scientists often make discoveries by adapting previous knowledge to a
new problem. Analogy played a role in some of the most important discoveries
ever made, such as Darwin's theory of evolution and Maxwell's theory of
electromagnetism. During the 1980s, the study of analogy went well beyond
previous philosophical accounts through the development of powerful
computational models of how analogs are retrieved from memory and mapped to
current problems to provide solutions. Falkenhainer, Forbus, and Gentner
(1989) produced SME, the Structure Mapping Engine, and this program was used
to model analogical explanations of evaporation and osmosis (Falkenhainer
1990). Holyoak and Thagard (1989) used different computational methods to
produce ACME, the Analogical Constraint Mapping Engine, which was generalized
into a theory of analogical thinking that applies to scientific as well as
everyday thinking (Holyoak and Thagard 1995).
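The core idea behind such mapping engines can be conveyed by a minimal sketch (an illustration of structural mapping in general, not the actual SME or ACME algorithms): search for a one-to-one correspondence between the objects of two domains that preserves as many relational facts as possible. The solar-system/atom facts below are a standard toy example, encoded here in a deliberately simplified way.

# A minimal sketch of analogical mapping by structural consistency
# (illustrative; not the SME or ACME algorithms themselves).
from itertools import permutations

# Relational facts as (relation, arg1, arg2); toy hypothetical encodings.
solar_system = [("attracts", "sun", "planet"), ("more_massive", "sun", "planet"),
                ("revolves_around", "planet", "sun")]
atom         = [("attracts", "nucleus", "electron"), ("more_massive", "nucleus", "electron"),
                ("revolves_around", "electron", "nucleus")]

def best_mapping(source, target):
    src_objs = sorted({o for _, a, b in source for o in (a, b)})
    tgt_objs = sorted({o for _, a, b in target for o in (a, b)})
    best, best_score = None, -1
    for perm in permutations(tgt_objs, len(src_objs)):
        m = dict(zip(src_objs, perm))
        # score = number of source facts whose image under m is also a target fact
        score = sum((r, m[a], m[b]) in target for r, a, b in source)
        if score > best_score:
            best, best_score = m, score
    return best, best_score

print(best_mapping(solar_system, atom))
# -> ({'planet': 'electron', 'sun': 'nucleus'}, 3)

Real systems such as SME and ACME avoid this brute-force search and add further constraints (systematicity, similarity, pragmatic importance), but the output has the same character: a structurally consistent correspondence between domains.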
Space does not permit further discussion of computational models of human
discovery, but the above research projects illustrate how thought processes
such as those involved in numerical law generation, theoretical concept
formation, and analogy can be understood computationally. Examples of
non-psychological investigations of scientific discovery are described in
sections 2 and 3.
Evaluation
(omitted)
2. Engineering AI
As the references to my own work in the last section indicate, I pursue the
cognitive modeling approach to computational philosophy of science, allying
philosophy of science with cognitive science and naturalistic epistemology.
But much valuable work in AI and philosophy has been done that makes no
claims to psychological plausibility. One can set out to build a scientist
without trying to reverse engineer a human scientist. The engineering AI
approach to computational philosophy of science is allied, not with
naturalistic, psychologistic epistemology, but with what has been called
"android epistemology", the epistemology of machines that may or may not be
built like humans (Ford, Glymour, and Hayes 1995). This approach is
particularly useful when it exploits such differences between digital
computers and humans as computers' capacity for very fast searches to perform
tasks that human scientists cannot do very well.
Discovery
One goal of engineering AI is to produce programs that can make discoveries
that have eluded humans. Bruce Buchanan, who was originally trained as a
philosopher before moving into AI research, reviewed over a dozen AI programs
that formulate hypotheses to explain empirical data (Buchanan 1983). One of
the earliest and most impressive programs was DENDRAL which performed
chemical analysis. Given spectroscopic data from an unknown organic chemical
sample, it determined the molecular structure of the sample (Lindsay et al.
1980). The program META-DENDRAL pushed the discovery task one step farther
back: given a collection of analytic data from a mass spectrometer, it
discovered rules explaining the fragmentation behavior of chemical samples. A
more recent program for chemical discovery is MECHEM, which automates the
task of finding mechanisms for chemical reactions: given experimental evidence
about a reaction, the program searches for the simplest mechanism consistent
with theory and experiment (Valdes-Perez, 1994).
Discovery programs have also been written for problems in biology, physics,
and other scientific domains. In order to model biologists' discoveries
concerning gene regulation in bacteria, Karp (1990) wrote a pair of programs,
GENSIM and HYPGENE. GENSIM was used to represent a theory of bacterial gene
regulation, and HYPGENE formulates hypotheses that improve the predictive
power of GENSIM theories given experimental data. More recently, he has
shifted from modeling historical discoveries to the attempt to write programs
that make original discoveries from large scientific databases such as ones
containing information about enzymes, proteins, and metabolic pathways (Karp
and Mavrovouniotis 1994). Cheeseman (1990) used a program that applied
Bayesian probability theory to discover previously unsuspected fine structure
in the infrared spectra of stars. Machine learning techniques are also
relevant to social science research, particularly the problem of inferring
causal models from social data. The TETRAD program looks at statistical data
in fields such as industrial development and voting behavior and builds
causal models in the form of a directed graph of hypothetical causal
relationships (Glymour et al., 1987).
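A drastically simplified sketch can convey the constraint-based style of search that such causal-modeling programs use (an illustration only, not TETRAD's actual algorithms): start with every pair of variables connected and delete an edge whenever the two variables look independent, either marginally or conditional on some third variable. The variable names and simulated data are hypothetical.

# A simplified sketch of constraint-based causal structure search
# (illustrative; not the TETRAD algorithms).
import numpy as np

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def skeleton(data, threshold=0.1):
    """data: dict of variable name -> 1-D array. Returns retained edges."""
    names = list(data)
    edges = set()
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if abs(corr(data[x], data[y])) < threshold:
                continue                      # marginally independent: no edge
            if any(abs(partial_corr(data[x], data[y], data[z])) < threshold
                   for z in names if z not in (x, y)):
                continue                      # independent given some z: no edge
            edges.add((x, y))
    return edges

# Hypothetical causal chain: education -> income -> savings (plus noise).
rng = np.random.default_rng(0)
education = rng.normal(size=2000)
income = 0.8 * education + rng.normal(size=2000)
savings = 0.8 * income + rng.normal(size=2000)
print(skeleton({"education": education, "income": income, "savings": savings}))
# expected: {('education', 'income'), ('income', 'savings')} -- the direct links;
# the education-savings edge is dropped because income screens it off.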
One of the fastest growing areas of artificial intelligence is "data mining",
in which machine learning techniques are used to discover regularities in
large computer data bases such as the terabytes of image data collected by
astronomical surveys (Fayyad, Piatetsky-Shapiro, and Smyth 1996). Data mining
is being applied with commercial success by companies that wish to learn more
about their operations, and similar machine learning techniques may have
applications to large scientific data bases such as those being produced by
the human genome project.
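As a small illustration of the kind of regularity such techniques look for (a toy sketch, not the systems surveyed by Fayyad et al.), one simple data-mining method counts which attributes co-occur frequently across records; the records and attributes below are invented.

# A tiny sketch of frequent-pattern (association) mining (illustrative only).
from itertools import combinations
from collections import Counter

records = [
    {"hot", "dense", "x_ray_source"},
    {"hot", "dense", "x_ray_source"},
    {"hot", "diffuse"},
    {"cool", "dense"},
    {"hot", "dense", "x_ray_source"},
]

def frequent_pairs(records, min_support=0.5):
    """Return attribute pairs that co-occur in at least min_support of the records."""
    counts = Counter(pair for rec in records
                     for pair in combinations(sorted(rec), 2))
    n = len(records)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

print(frequent_pairs(records))
# -> pairs such as ('dense', 'x_ray_source') with support 0.6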
Evaluation
(omitted)
3. Theory of Computation
Both the cognitive modeling and engineering AI approaches to philosophy of
science involve writing and experimenting with running computer programs. But
it is also possible to take a more theoretical approach to computational
issues in the philosophy of science, exploiting results in the theory of
computation to reach conclusions about processes of discovery and evaluation.
Discovery
Scientific discovery can be viewed as a problem in formal learning theory, in
which the goal is to identify a language given a string of inputs (Gold
1968). Analogously, a scientist can be thought of as a function that takes as
input a sequence of formulas representing observations of the environment and
produces as output a set of formulas that represent the structure of the
world (Kelly 1995, Kelly and Glymour 1989, Osherson and Weinstein 1989).
Although formal learning theory has produced some interesting theorems, they
are limited in their relevance to the philosophy of science in several
respects. Formal learning theory assumes a fixed language and therefore
ignores the conceptual and terminological creativity that is important to
scientific development. In addition, formal learning theory tends to view
hypotheses produced as a function of input data, rather than as a much more
complex function of the data and the background concepts and theories
possessed by a scientist. Formal learning theory also overemphasizes the goal
of science to produce true descriptions, neglecting the important role of
explanatory theories and hypothetical entities in scientific progress.
Evaluation
(omitted)
4. What Computation Adds to Philosophy of Science
Almost twenty years ago, Aaron Sloman (1978) published an audacious book, The
Computer Revolution in Philosophy, which predicted that within a few years
any philosopher not familiar with the main developments of artificial
intelligence could fairly be accused of professional incompetence. Since
then, computational ideas have had a substantial impact on the philosophy of
mind, but a much smaller impact on epistemology and philosophy of science.
Why? One reason, I conjecture, is the kind of training that most philosophers
have, which includes little preparation for actually doing computational
work. Philosophers of mind have often been able to learn enough about
artificial intelligence to discuss it, but for epistemology and philosophy of
science it is much more useful to perform computations rather than just to
talk about them. To conclude this review, I shall attempt to summarize what
is gained by adding computational modeling to the philosophical tool kit.
Bringing artificial intelligence into philosophy of science introduces new
conceptual resources for dealing with the structure and growth of scientific
knowledge. Instead of being restricted to the usual representational schemes
based on formal logic and ordinary language, computational approaches to the
structure of scientific knowledge can include many useful representations
such as prototypical concepts, concept hierarchies, production rules, causal
networks, mental images, and so on. Philosophers concerned with the growth of
scientific knowledge from a computational perspective can go beyond the
narrow resources of inductive logic to consider algorithms for generating
numerical laws, discovering causal networks, forming concepts and hypotheses,
and evaluating competing explanatory theories.
In addition to the new conceptual resources that AI brings to philosophy of
science, it also brings a new methodology involving the construction and
testing of computational models. This methodology typically has numerous
advantages over pencil-and-paper constructions. First, it requires
considerable precision, in that to produce a running program the structures
and algorithms postulated as part of scientific cognition need to be
specified. Second, getting a program to run provides a test of the
feasibility of its assumptions about the structure and processes of
scientific development. Contrary to the popular view that clever programmers
can get a program to do whatever they want, producing a program that mimics
aspects of scientific cognition is often very challenging, and production of
a program provides a minimal test of computational feasibility. Moreover, the
program can then be used for testing the underlying theoretical ideas by
examining how well the program works on numerous examples of different kinds.
Comparative evaluation becomes possible when different programs accomplish a
task in different ways: running the programs on the same data allows
evaluation of their computational models and background theoretical ideas.
Third, if the program is intended as part of a cognitive model, it can be
assessed concerning how well it models human thinking.
The assessment of cognitive models can address questions such as the
following:
1. Genuineness. Is the model a genuine instantiation of the theoretical ideas
about the structure and growth of scientific knowledge, and is the program a
genuine implementation of the model?
2. Breadth of application. Does the model apply to lots of different
examples, not just a few that have been cooked up to make the program work?
3. Scaling. Does the model scale up to examples that are considerably larger
and more complex than the ones to which it has been applied?
4. Qualitative fit. Does the computational model perform the same kinds of
tasks that people do in approximately the same way?
5. Quantitative fit. Can the computational model simulate quantitative
aspects of psychological experiments, e.g. ease of recall and mapping in
analogy problems?
6. Compatibility. Does the computational model simulate representations and
processes that are compatible with those found in theoretical accounts and
computational models of other kinds of cognition?
Computational models of the thought processes of scientists that satisfy these
criteria have the potential to greatly increase our understanding of the
scientific mind. Engineering AI need not address questions of qualitative and
quantitative fit with the results of psychological experiments, but should
employ the other four standards of assessment.
There are numerous issues connecting computation and the philosophy of
science that I have not touched on in this review. Computer science can
itself be a subject of philosophical investigation, and some work has been
done discussing epistemological issues that arise in computer research (see
e.g. Fetzer, this volume; Thagard, 1993). In particular, the philosophy of
artificial intelligence and cognitive science are fertile areas of philosophy
of science. My concern has been more narrow, with how computational models
can contribute to philosophy of science. I conclude with a list of open
problems that seem amenable to computational/philosophical investigation:
1. In scientific discovery, how are new questions generated? Formulating a
useful question such as "How might species evolve?" or "Why do the planets
revolve around the sun?" is often a prerequisite to more data-driven and
focused processes of scientific discovery, but no computational account of
scientific question generation has yet been given.
2. What role does visual imagery play in the structure and growth of
scientific knowledge? Although various philosophers, historians, and
psychologists have documented the importance of visual representations in
scientific thought, existing computational techniques have not been well
suited for providing detailed models of the cognitive role of pictorial
mental images (see e.g. Shelley 1996).
3. How is consensus formed in science? All the computational models discussed
in this paper have concerned the thinking of individual scientists, but it
might also be possible to develop models of social processes such as
consensus formation along the lines of the field known as distributed
artificial intelligence which considers the potential interactions of
multiple intelligent agents (Thagard 1993).
Perhaps problems such as these will, like other issues concerning discovery
and evaluation, yield to computational approaches that involve cognitive
modeling, engineering AI, and the theory of computation.