
The Mechanical Mind

How does the human mind represent the external world? What is thought, and can it be studied scientifically? Does it help to think of the mind as a kind of machine?


Tim Crane answers questions like these in a lively and straightforward way, presuming no prior knowledge of philosophy or related disciplines. Since its first publication in 1995, The Mechanical Mind has introduced thousands of people to the contemporary philosophy of mind. Tim Crane explains some of the fundamental ideas that span the philosophy of mind, artificial intelligence and cognitive science: what the mind–body problem is; what a computer is and how it works; what thoughts are, and how computers and minds might have them. He examines different models of the mind, from dualism to eliminativism, and asks whether there can be thought without language and whether the mind is subject to the same causal laws as natural phenomena. The result is a fascinating exploration of the theories and arguments surrounding the notions of thought and representation.


This edition has been fully revised and updated, and includes a new chapter on consciousness and new sections on modularity and evolutionary psychology. There are also guides to further reading, a chronology and a new glossary of terms such as Mentalese, connectionism and intentionality. The Mechanical Mind is accessible to the general reader as well as students, and to anyone interested in the mechanisms of our minds.

Tim Crane is Professor of Philosophy at University College London and Director of the Philosophy Programme of the School of Advanced Study, University of London. He is the author of Elements of Mind and the editor of The Contents of Experience.

But how is the soul informed, and by what art doth it read that such an alteration of strokes in the matter . . . signifies such an object? Did we learn such an alphabet in our embryo-state? And how comes it to pass that we are not aware of any such congenite apprehensions? . . . That by diversity of motions we should spell out figures, distances, magnitudes, colours, things not resembled by them, we must attribute to some secret deductions.

The Mechanical Mind

A philosophical introduction to minds, machines and mental representation

Second edition

Tim Crane

First published 1995 by Penguin Books

Second edition published 2003 by Routledge

11 New Fetter Lane, London EC4P 4EE

Simultaneously published in the USA and Canada by Routledge

29 West 35th Street, New York, NY 10001

Routledge is an imprint of the Taylor & Francis Group

This edition published in the Taylor & Francis e-Library, 2003.

© 1995, 2003 Tim Crane

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
A catalog record for this book has been requested

ISBN 0-203-42631-2 Master e-book ISBN

ISBN 0-203-43982-1 (Adobe eReader Format)

ISBN 0-415-29030-9 (hbk)
ISBN 0-415-29031-7 (pbk)

Contents

List of figures
Preface to the first edition
Preface to the second edition

Introduction: the mechanical mind
  The mechanical world picture
  The mind

1 The puzzle of representation
  The idea of representation
  Pictures and resemblance
  Linguistic representation
  Mental representation
  Thought and consciousness
  Intentionality
  Brentano's thesis
  Conclusion: from representation to the mind
  Further reading

2 Understanding thinkers and their thoughts
  The mind–body problem
  Understanding other minds
  The causal picture of thoughts
  Common-sense psychology
  The science of thought: elimination or vindication?
  Theory versus simulation
  Conclusion: from representation to computation
  Further reading

3 Computers and thought
  Asking the right questions
  Computation, functions and algorithms
  Turing machines
  Coding and symbols
  Instantiating a function and computing a function
  Automatic algorithms
  Thinking computers?
  Artificial intelligence
  Can thinking be captured by rules and representations?
  The Chinese room
  Conclusion: can a computer think?
  Further reading

4 The mechanisms of thought
  Cognition, computation and functionalism
  The language of thought
  Syntax and semantics
  The argument for the language of thought
  The modularity of mind
  Problems for the language of thought
  'Brainy' computers
  Conclusion: does computation explain representation?
  Further reading

5 Explaining mental representation
  Reduction and definition
  Conceptual and naturalistic definitions
  Causal theories of mental representation
  The problem of error
  Mental representation and success in action
  Mental representation and biological function
  Evolution and the mind
  Against reduction and definition
  Conclusion: can representation be reductively explained?
  Further reading

6 Consciousness and the mechanical mind
  The story so far
  Consciousness, 'what it's like' and qualia
  Consciousness and physicalism
  The limits of scientific knowledge
  Conclusion: what do the problems of consciousness tell us about the mechanical mind?
  Further reading

Glossary
The mechanical mind: a chronology
Notes
Index


Figures

Old man with a stick
Flow chart for the multiplication algorithm
A flow chart for boiling an egg
A machine table for a simple Turing machine
Mousetrap 'black box'
The mousetrap's innards
Multiplier black box
Flow chart for the multiplication algorithm again
An 'and-gate'
Mach bands
Diagram of a connectionist network
Cummins's 'Tower Bridge' picture of computation

Preface to the first edition

This book is an introduction to some of the main preoccupations of contemporary philosophy of mind. There are many ways to write an introductory book. Rather than giving an even-handed description of all recent philosophical theories of the mind, I decided instead to follow through a line of thought which captures the essence of what seem to me the most interesting contemporary debates. Central to this line of thought is the problem of mental representation: how can the mind represent the world? This problem is the thread that binds the chapters together, and around this thread are woven the other main themes of the book: the nature of everyday psychological explanation, the causal nature of the mind, the mind as a computer and the reduction of mental content.

Although there is a continuous line of argument, I have tried to construct the book so that (to some extent) the chapters can be read independently of each other. So Chapter 1 introduces the puzzle of representation and discusses pictorial, linguistic and mental representation. Chapter 2 is about the nature of common-sense (so-called 'folk') psychology and the causal nature of thoughts. Chapter 3 addresses the question of whether computers can think, and Chapter 4 asks whether our minds are computers in any sense. The final chapter discusses theories of mental representation and the brief epilogue raises some sceptical doubts about the limitations of the mechanical view of the mind. So those who are interested in the question of whether the mind is a computer could read Chapters 3 and 4 independently of the rest of the book. And those who are more interested in the more purely 'philosophical' problems might wish to read Chapters 1 and 2 separately. I have tried to indicate where the discussion gets more complicated, and which sections a beginner might like to skip. In general, though, Chapters 4 and 5 are heavier going than Chapters 1–3.


At the end of each chapter, I have given suggestions for further reading. More detailed references are given in the endnotes, which are intended only for the student who wishes to follow up the debate – no-one needs to read the endnotes in order to understand the book.

I have presented most of the material in this book in lectures and seminars at University College London over the last few years, and I am very grateful to my students for their reactions. I am also grateful to audiences at the Universities of Bristol, Kent and Nottingham, where earlier versions of Chapters 3 and 4 were presented as lectures. I would like to thank Stefan McGrath for his invaluable editorial advice, Caroline Cox, Stephen Cox, Virginia Cox, Petr Kolář, Ondrej Majer, Michael Ratledge and Vladimír Svoboda for their helpful comments on earlier versions of some chapters, Roger Bowdler for the drawings and Ted Honderich for his generous encouragement at an early stage. I owe a special debt to my colleagues Mike Martin, Greg McCulloch, Scott Sturgeon and Jonathan Wolff for their detailed and perceptive comments on the penultimate draft of the whole book, which resulted in substantial revisions and saved me from many errors. This penultimate draft was written in Prague, while I was a guest of the Department of Logic of the Czech Academy of Sciences. My warmest thanks go to the members of the Department – Petr Kolář, Pavel Materna, Ondrej Majer and Vladimír Svoboda, as well as Marie Duží – for their kind hospitality.

University College London

November 1994

Preface to the second edition

The main changes that I have made for this second edition are the replacement of the epilogue with a new chapter on consciousness, the addition of new sections on modularity and evolutionary psychology to Chapters 4 and 5, and the addition of the Glossary and Chronology at the end of the book. I have also corrected many stylistic and philosophical errors and updated the Further reading sections. My views on intentionality have changed in certain ways since I wrote this book. I now adopt an intentionalist approach to all mental phenomena, as outlined in my 2001 book, Elements of Mind (Oxford University Press). But I have resisted the temptation to alter significantly the exposition in Chapter 1, except where that exposition involved real errors.

I am very grateful to Tony Bruce for his enthusiastic support for a new edition of this book, to a number of anonymous reports from Routledge's readers for their excellent advice, and to Ned Block, Katalin Farkas, Hugh Mellor and Huw Price for their detailed critical comments on the first edition.

University College London

August 2002

To my parents

Introduction

The mechanical mind

A friend remarked that calling this book The Mechanical Mind is a bit like calling a murder mystery The Butler Did It. It would be a shame if the title did have this connotation, because the aim of the book is essentially to raise and examine problems rather than solve them. In broad outline, I try to do two things in this book: first, to explain the philosophical problem of mental representation; and, second, to examine the questions about the mind which arise when attempting to solve this problem in the light of dominant philosophical assumptions. Central among these assumptions is the view I call 'the mechanical mind'. Roughly, this is the view that the mind should be thought of as a kind of causal mechanism, a natural phenomenon which behaves in a regular, systematic way, like the liver or the heart.

In the first chapter, I introduce the philosophical problem of mental representation. This problem is easily stated: how can the mind represent anything? My belief, for example, that Nixon visited China is about Nixon and China – but how can a state of my mind be 'about' Nixon or China? How can my state of mind direct itself on Nixon and China? What is it for a mind to represent anything at all? For that matter, what is it for anything (whether a mind or not) to represent anything else?

This problem, which some contemporary philosophers call 'the problem of intentionality', has ancient origins. But recent developments in philosophy of mind – together with developments in the related disciplines of linguistics, psychology and artificial intelligence – have raised the old problem in a new way. So, for instance, the question of whether a computer could think is now recognised to be closely tied up with the problem of intentionality. And the same is true of the question of whether there can be a 'science of thought': can the mind be explained by science, or does it need its own distinctive, non-scientific modes of explanation? As we shall see, the full answer to this question depends on the nature of mental representation.


Underlying recent attempts to answer questions of this kind is what I am calling the mechanical view of the mind. Representation is considered a problem because it is hard to understand how a mere mechanism could represent the world – how the states of a mechanism could 'reach outside themselves' and direct themselves upon the world. The aim of this introduction is to give more of a sense of what I mean when I talk about the mechanical mind, by sketching the origins of the idea.


The mechanical world picture


The idea that the mind is a kind of natural mechanism derives from the idea that nature itself is a kind of mechanism. So to understand this way of looking at the mind, we need to understand – in very general terms – this way of looking at nature.


The modern Western picture of the world dates back to the 'scientific revolution' of the seventeenth century, and to the ideas of Galileo, Francis Bacon, Descartes and Newton. Throughout the Middle Ages and the Renaissance, people had thought of the world in organic terms. The Earth itself was regarded as a kind of organism, as this passage from Leonardo da Vinci vividly illustrates:


We can say that the earth has a vegetative soul, and that its flesh is the land, its bones are the structure of the rocks . . . its blood is the pools of water . . . its breathing and its pulse are the ebb and flow of the sea.1


This organic world picture, as we might call it, owed a great deal to the writings of Aristotle, the philosopher who had the greatest influence on mediaeval and Renaissance thought. (Indeed, so great was his influence that he was often referred to simply as 'the Philosopher'.) In Aristotle's system of the world, everything has its natural 'place' or condition, and things do what they do because it is in their nature to achieve their natural state. This applies to inorganic as much as to organic things: stones fall to the ground because their natural place is on the ground; fire rises to its natural place in the sky; and so on. Everything in the universe was seen as having its ultimate end or goal, a view which fits naturally with the conception of God as the ultimate driving force of the universe.


17 世纪,这一切都 开始 分崩离一个重要的变化是,亚里士多德的解释方法——最终目的“性质”而言——机械或机械论的解释方法所取代——就物质运动中的规律性、决定性行为而言了解世界的方法不是通过研究和解释亚里士多德的著作,而是通过观察实验,以及 自然的数量相互作用科学地理解世界,使用数学测量法是新的“机械世界图景”的关键要素之一。伽利略有一句名言:


这是一本伟大的宇宙之书,除非你首先来理解这种语言,并阅读构成它的字母表,否则就无法理解它。它是用数学语言写成的,它的字符是三角形、圆圈和其他几何图形,没有它们,人类就不可能理解它的一个字。
2


The idea that the behaviour of the world can be measured and understood in terms of precise mathematical equations – laws of nature – is at the heart of the development of physics as we know it today. Crudely speaking, according to the mechanical world picture, things do what they do not because they are trying to reach their natural place or final end, nor because they are obeying the will of God, but because they are caused to move in certain ways in accordance with the laws of nature.


一般的术语来说,这就是机械自然观含义。当然,“机械”一词过去(有时仍然如此被理解具体的东西


例如机械系统被认为是仅在接触确定性上相互作用的系统后来的科学发展——例如牛顿的物理学,假设引力 显然作用处,或者发现基本物理过程不是确定性的——驳斥机械这个特定意义上的世界图景但是这些发现当然不会破坏一个根据自然法则规律运作的原因世界的一般图景;而这个更一般的概念就是我在本书中所说的“机械”的意思。


On the 'organic' world picture of the Middle Ages and the Renaissance, inorganic things were conceived along the lines of organic things. Everything had its natural place, suited to the harmonious workings of the world 'animal'. But on the mechanical world picture the situation is reversed: organic things are conceived along the lines of inorganic things. Everything, organic or inorganic, does what it does because it is caused by something else, in accordance with principles that can be formulated precisely and mathematically. René Descartes (1596–1650) notoriously held that non-human animals are machines, lacking any consciousness or mentality: the behaviour of animals, he thought, could be explained in wholly mechanical terms. As the mechanical world picture developed, it was the watch, rather than the animal, that became a dominant metaphor. As Julien de La Mettrie, an eighteenth-century pioneer of the mechanical view of the mind, wrote: 'the body is but a watch . . . and man but a collection of springs which wind each other up'.3


It is not surprising, then, that until the middle of the twentieth century one of the great mysteries for the mechanical world picture was the nature of life itself. Many believed that a mechanical explanation of life could in principle be found – Thomas Hobbes (1588–1679) had confidently asserted in 1651 that 'life is but a motion of limbs' – and that the only problem was finding it. Gradually, more and more came to be understood about how life could be a purely mechanical process, until in 1953 Crick and Watson discovered the structure of DNA. The ability of organisms to reproduce themselves now seemed explicable, at least in principle, in chemical terms: the organic could be explained in terms of the inorganic.


The mind


But where does this leave the mind? Although Descartes was quite prepared to regard non-human animals as mere machines, he did not take the same view of the human mind: while he did think that the mind (or soul) has effects in the physical world, he placed it outside the mechanical universe of matter. Many mechanist philosophers of later centuries, however, could not accept this exceptional treatment, and so their greatest challenge lay in explaining the place of the mind in nature. One of the great puzzles for the mechanical world picture was the explanation of the mind in mechanical terms.


As with the mechanical explanation of life, many believed that there could be such an explanation of the mind. Particularly good examples of this view can be found in the slogans of the eighteenth- and nineteenth-century materialists: La Mettrie's splendid remark that 'the brain has muscles for thinking as the legs have muscles for walking', or the physiologist Karl Vogt's slogan that 'the brain secretes thought just as the liver secretes bile'. These, of course, were materialist manifestos rather than theories.


So what would a mechanical explanation of the mind be like? One influential idea in the philosophy of the last forty years is that explaining the mind would involve showing that it is really just matter: that mental states are really just chemical states of the brain. This materialist (or 'physicalist') view often rests on the assumption that fully to explain something is, ultimately, to explain it in terms of physical science (there will be more on this view in Chapter 6). That is, sciences other than physics must have their scientific credentials vindicated by physics – all science must, in some sense, reduce to physics. On the standard account, this means that the content of sciences other than physics must be derivable from physics (plus 'bridge' principles linking physical concepts to non-physical ones), so that everything that can be explained by any science can be explained by physics. This is the view – sometimes called 'reductionism' – behind Rutherford's memorable quip: 'there is physics; and there is stamp-collecting'.6


This extreme reductionism is really very implausible, and it is very doubtful whether scientific practice actually conforms to it. Very few sciences other than physics have actually been reduced to physics in this sense, and it seems unlikely that the science of the future will reduce them all. If anything, the sciences seem to be becoming more diverse, not more unified. For this reason (among others), I think we can distinguish between the general view that the mind can be mechanically explained (or causally explained in terms of some science or other) and the more extreme reductionist thesis. One can believe that there can be a science of the mind without believing that this science must reduce to physics. This will be a guiding assumption of this book – though I do not pretend to have argued for it here.7


My own view, and the view I shall try to defend in this book, is that a mechanical explanation of the mind must (at the very least) show how the mind is part of what philosophers call the 'causal order' of the world. The other thing a mechanical explanation of the mind must do is give the details of the causal regularities or generalisations that describe the mind. In other words, a mechanical explanation of the mind is committed to the existence of natural psychological laws: just as physics discovers the laws that govern the non-mental world, so psychology discovers the laws that govern the mind. There can be a natural science of the mind.

Yet while this view is embraced by most philosophers of mind in its broad outlines, its application to many of the phenomena of mind is deeply problematic. Two kinds of phenomenon stand out as obstacles to the mechanical view of mind: the phenomenon of consciousness and the phenomenon of thought. Hence recent philosophy of mind's preoccupation with two questions: first, how can a mere mechanism be conscious? And, second, how can a mere mechanism think about and represent things? The central theme of this book is that generated by the second question: the problem of thought and mental representation. Thus, Chapters 1–5 are largely concerned with this problem. But a full treatment of the mechanical mind also needs to say something about the problem of consciousness: no mechanical theory of the mind which failed to address this most fundamental mental phenomenon could be regarded as a complete theory of the mind. This is the subject matter of Chapter 6.

1

The puzzle of representation

When NASA sent the Pioneer 10 space probe to explore the solar system in 1972, they placed on board a metal plate, engraved with various pictures and signs. On one part of the plate was a diagram of a hydrogen atom, while on another was a diagram of the relative sizes of the planets in our solar system, indicating the planet from which Pioneer 10 came. The largest picture on the plate was a line drawing of a naked man and a naked woman, with the man's right hand raised in greeting. The idea behind this was that when Pioneer 10 eventually left the solar system it would pursue an aimless journey through space, perhaps to be discovered in millions of years time by some alien life form. And perhaps these aliens would be intelligent, and would be able to understand the diagrams, recognise the extent of our scientific knowledge, and come to realise that our intentions towards them, whoever they may be, are peaceful.

It seems to me that there is something very humorous about this story. Suppose that Pioneer 10 were to reach some distant star. And suppose that the star had a planet with conditions that could sustain life. And suppose that some of the life forms on this planet were intelligent and had some sort of sense organs with which they could perceive the plate in the spacecraft. This is all pretty unlikely. But even having made these unlikely suppositions, doesn’t it seem even more unlikely that the aliens would be able to understand what the symbols on the plate mean?

Think about some of the things they would have to understand. They would have to understand that the symbols on the plate were symbols – that they were intended to stand for things, and were not just random scratches on the plate, or mere decoration. Once the aliens knew that they were symbols, they would have to understand what sort of symbols they were: for example, that the diagram of the hydrogen atom was a scientific diagram and not a picture. Then they would have to have some idea of what sorts of things the symbols symbolised: that the drawing of the man and woman symbolised life forms rather than chemical elements, that the diagram of the solar system symbolises our part of the universe rather than the shape of the designers of the spacecraft. And – perhaps most absurd of all – even if they did figure out what the drawings of the man and woman were, they would have to recognise that the raised hand was a sign of peaceful greeting rather than of aggression, impatience or contempt, or simply that it was the normal position of this part of the body.

When you consider all this, doesn't it seem even more unlikely that the imagined aliens would understand the symbols than that the spaceship would arrive at a planet with intelligent life in the first place?

One thing this story illustrates, I think, is something about the philosophical problem or puzzle of representation. The drawings and symbols on the plate represent things – atoms, human beings, the solar system – but the story suggests that there is something puzzling about how they do this. For when we imagine ourselves into the position of the aliens, we realise that we can't tell what these symbols represent just by looking at them. No amount of scrutiny of the marks on the plate can reveal that these marks stand for a man, and these marks stand for a woman, and these other marks stand for a hydrogen atom. The marks on the plate can be understood in many ways, but it seems that nothing in the marks themselves tells us how to understand them. Ludwig Wittgenstein, whose philosophy was dominated by questions about representation, expressed it succinctly: 'Each sign by itself seems dead; what gives it life?'8

The philosophical puzzle about representation can be put simply: how is it possible for one thing to represent something else? Put like this, the question may seem a little obscure, and it may be hard to see exactly what is puzzling about it. One reason for this is that representation is such a familiar fact of our lives. Spoken and written words, pictures, symbols, gestures, facial expressions can all be seen as representations, and form the fabric of our everyday life. It is only when we start reflecting on things like the Pioneer 10 story that we begin to see how puzzling representation really is. Our words, pictures, expressions and so on represent, stand for, signify or mean things – but how?

On the one hand, representation comes naturally to us. When we talk to each other, or look at a picture, what is represented is often immediate, and not something we have to figure out. But, on the other hand, words and pictures are just physical patterns: vibrations in the air, marks on paper, stone, plastic, film or (as in Pioneer 10) metal plates. Take the example of words. It is a truism that there is nothing about the physical patterns of words themselves which makes them represent what they do. Children sometimes become familiar with this fact when they repeat words to themselves over and over until they seem to 'lose' their meaning. Anyone who has learned a foreign language will recognise that, however natural it seems in the case of our own language, words do not have their meaning in and of themselves. Or as philosophers put it: they do not have their meaning 'intrinsically'.

On the one hand, then, representation seems natural, spontaneous and unproblematic. But, on the other hand, representation seems unnatural, contrived and mysterious. As with the concepts of time, truth and existence (for example) the concept of representation presents a puzzle characteristic of philosophy: what seems a natural and obvious aspect of our lives becomes, on reflection, deeply mysterious.

This philosophical problem of representation is one main theme of this book. It is one of the central problems of current philosophy of mind. And many other philosophical issues cluster around this problem: the place of the mind in nature, the relation between thought and language, the nature of our understanding of one another, the problem of consciousness and the possibility of thinking machines. All these issues will be touched on here. The aim of this chapter is to sharpen our understanding of the problem of representation by showing how certain apparently obvious solutions to it only lead to further problems.

The idea of representation

I'll start by saying some very general things about the idea of representation. Let's not be afraid to state the obvious: a representation is something that represents something. I don't say that a representation is something that represents something else, because a representation can represent itself. (To take a philosophically famous example, the 'Liar Paradox' sentence 'This sentence is false' represents the quoted sentence itself.) But the normal case is where one thing – the representation itself – represents another thing – what we might call the object of representation. We can therefore ask two questions: one about the nature of representations and one about the nature of objects of representation.

What sorts of things can be representations? I have already mentioned words and pictures, which are perhaps the most obvious examples. But, of course, there are many other kinds. The diagram of the hydrogen atom on Pioneer 10's plate is neither a bunch of words nor a picture, but it represents the hydrogen atom. Numerals, such as 15, 23, 1001, etc., represent numbers. Numerals can represent other things too: for example, a numeral can represent an object's length (in metres or in feet) and a triple of numerals can represent a particular shade of colour by representing its degree of hue, saturation and brightness. The data structures in a computer can represent text or numbers or images. The rings of a tree can represent its age. A flag can represent a nation. A political demonstration can represent aggression. A piece of music can represent a mood of unbearable melancholy. Flowers can represent grief. A glance or a facial expression can represent irritation. And, as we shall see, a state of mind – a belief, a hope, a desire or a wish – can represent almost anything at all.

There are so many kinds of things that can be representations that it would take more than one book to discuss them all. And, of course, I shall not try to do this. I shall focus on simple examples of representation in language and in thought. For instance, I will talk about how it is that I can use a word to represent a particular person, or how I can think (say) about a dog. I'll focus on these simple examples because the philosophical problems about representation arise even in the simplest cases. Introducing the more complex cases – such as how a piece of music can represent a mood – will at this stage only make the issue more difficult and mind-boggling than it is already. But to ignore these complex cases does not mean that I think they are unimportant or uninteresting.9


Now to our second question: what sorts of things can be objects of representation? The answer, obviously, is almost anything. Words and pictures can represent physical objects, such as people and houses. They can also represent features or properties of physical objects: the shape of a person, say, or the colour of a house. Sentences, such as the sentence 'Someone is in my house', can represent what we might call facts, situations or states of affairs: in this case, the fact that someone is in my house. Non-physical objects can be represented too: if there are numbers, they are plainly not physical objects – where in the physical world is the number 3? Words, pictures, music and facial expressions can represent moods, feelings and emotions. And representations can represent things that do not exist. I can think about – that is, represent – unicorns, or the largest prime number. None of these things exists; but they can all be 'objects' of representation.


This last example points to one of the odd features of representation. On the face of it, 'X represents Y' expresses a relation between two things. But a relation between two things normally implies that both things exist. Take the relation of kissing: if I kiss Santa Claus, then Santa Claus and I must both exist. And the fact that Santa Claus does not exist explains why I cannot kiss him.


Not so with representation. If I think about Santa Claus – and thereby represent him – this does not imply that Santa Claus exists. Santa Claus's non-existence is no obstacle to my representing him, in the way that it is an obstacle to my kissing him. In this respect, representation seems very different from other relations. As we shall see later, many philosophers have taken this aspect of representation to be central to its nature.


So there are many kinds of representation, and many kinds of thing that can be objects of representation. How can we make any progress in understanding representation? There are two kinds of question we can ask:


First, we can ask how some particular kind of representation – pictures, words or whatever – manages to represent. What we want to know is what it is about this kind of representation that enables it to play its representing role. (As an illustration, I consider below the idea that pictures might represent things by resembling them.) Obviously, we should not assume that a story about one form of representation will necessarily apply to all the others: the way pictures represent, for example, differs from the way music represents.


Second, we can ask whether some particular form of representation is more basic or fundamental than the others. That is, can certain kinds of representation be explained in terms of other kinds? For example: one question in current philosophy is whether we can explain the way language represents in terms of the representational powers of mental states, or whether mental representation must instead be explained in terms of language. If one kind of representation is more fundamental than the others, then we are clearly making progress in understanding representation as a whole.


My own view is that mental representation – the representation of the world by states of mind – is the most fundamental form of representation. To see how this can be a reasonable view, we need to look briefly at the representational powers of pictures and language.


图片相似之处


On the face of it, pictures seem to represent in a more direct way than other forms of representation. For, while there is nothing intrinsic to the word 'dog' that makes it stand for dogs, there surely is something intrinsic to a picture of a dog that makes it represent a dog: namely, what the picture looks like. Pictures of dogs look, to some extent, like dogs, and they do so because of their intrinsic features: their shape, colour and so on. So perhaps a picture represents what it does because it resembles that thing.


The idea that pictures represent by resemblance would be an answer to the first kind of question mentioned above: how does a particular kind of representation manage to represent? The answer: pictures represent things by resembling those things. (And this answer could then be used as the basis for an answer to the second question: the suggestion would be that all other forms of representation can be explained in terms of pictorial representation. As we shall see below, this idea is hopeless.) Let us call this idea 'the resemblance theory of pictorial representation', or 'the resemblance theory' for short. To discuss the resemblance theory precisely, we need some elementary philosophical terminology.

Philosophers distinguish between two ways in which the truth of one claim can depend on the truth of another. They call these two ways ‘necessary’ and ‘sufficient’ conditions. To say that a particular claim, A, is a necessary condition for some other claim, B, is to say this: B is true only if A is true too. Intuitively, B will not be true without A being true, so the truth of A is necessary (i.e. needed, required) for the truth of B.

To say that A is a sufficient condition for B is to say this: if A is true, then B is true too. Intuitively, the truth of A ensures the truth of B – or, in other words, the truth of A suffices for the truth of B. To say that A is a necessary and sufficient condition for the truth of B is to say this: if A is true, B is true, and if B is true, A is true. (This is sometimes expressed as ‘A is true if and only if B is true’, and ‘if and only if’ is sometimes abbreviated to ‘iff’.)

Let’s illustrate this distinction with an example. If I am in London, then I am in England. So being in England is a necessary condition for being in London: I just can’t be in London without being in England. Likewise, being in London is a sufficient condi- tion for being in England: being in London will suffice for being in England. But being in London is clearly not a necessary condition for being in England, as there are many ways one can be in England without being in London. For the same reason, being in England is not a sufficient condition for being in London.
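These definitions can be put schematically in the notation of elementary logic, writing 'A → B' for 'if A is true, then B is true' and 'A ↔ B' for 'A if and only if B' (a compact gloss on the definitions just given):

\[
\begin{aligned}
&\text{$A$ is a necessary condition for $B$:} &\quad& B \rightarrow A\\
&\text{$A$ is a sufficient condition for $B$:} &\quad& A \rightarrow B\\
&\text{$A$ is necessary and sufficient for $B$:} &\quad& A \leftrightarrow B
\end{aligned}
\]

In the London example, 'I am in London → I am in England' holds but its converse does not; which is why being in England is necessary but not sufficient for being in London.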

The resemblance theory takes pictorial representation to depend on the resemblance between the picture and what it represents. Let’s express this dependence more precisely in terms of necessary and sufficient conditions: a picture (call it P) represents something (call it X) if and only if P resembles X. That is, a resemblance between P and X is both necessary and sufficient for P to represent X.


This way of putting the resemblance theory is certainly more precise than our initial vague formulation. Unfortunately, however, putting it this precisely only exposes its problems. Let us first take the idea that resemblance might be a sufficient condition for pictorial representation.


相似性足以表示就是这样说:如果 X 类似于 Y,那么 X 代表 Y。首先应该让我们印象深刻的是 'similars' 有点模糊。因为,从某种意义上说,几乎所有事物与其他事物相似这个意义上说,“相似只是该事物具有某种共同特征。所以,从这个意义上说,我不仅像我的父亲我的母亲,因为长得他们,而且还像我的桌子——我的桌子都是实物——而且数字 3 – 数字 3 和 I 都是这样或那样的对象。但我不是这些事情中的任何一个的代表。


If we want resemblance to serve as the basis of representation, perhaps we need to narrow down the ways or respects in which something resembles something else. But notice that if we say that X represents Y whenever X resembles Y in some respects, then I will represent my father, since I resemble him in certain respects – in features of my face, say – but this does not make me a representation of him. And, obviously, we do not want to add that X must resemble Y in those respects in which X represents Y, since this would make the resemblance theory circular and uninformative: if X resembles Y in those respects in which X represents Y, then X represents Y. That could hardly be an analysis of the concept of representation.


There is a further problem with resemblance as a sufficient condition. Suppose we have specified certain respects in which something resembles something else: a picture of Napoleon, say, might resemble Napoleon in the facial expression, the proportions of the body and the characteristic position of the arm. But it seems an evident fact about resemblance that if X resembles Y, then Y resembles X. If I resemble my father in certain respects, then my father resembles me in those respects too. Yet this does not carry over to representation. If the picture represents Napoleon, then Napoleon resembles the picture; but Napoleon does not represent the picture. So, if we are to avoid making every pictured object itself a pictorial representation of its picture, resemblance cannot suffice for pictorial representation.


Finally, we should consider the obvious fact that everything resembles itself. (Philosophers say that resemblance is a reflexive relation.) If resemblance were taken to be a sufficient condition of representation, then everything would represent itself. But this is absurd. We should not be happy with a theory of pictorial representation which turns everything into a picture of itself. That utterly trivialises the idea of pictorial representation.
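The two features of resemblance just appealed to can be stated schematically, writing 'Res(x, y)' for 'x resembles y' (the notation is a gloss on the argument, not part of it):

\[
\mathrm{Res}(x, x) \quad \text{(reflexivity)} \qquad \mathrm{Res}(x, y) \rightarrow \mathrm{Res}(y, x) \quad \text{(symmetry)}
\]

Representation has neither feature: a picture of Napoleon represents Napoleon, but Napoleon does not represent the picture, and the picture does not represent itself.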


因此相似性可能是图像表现充分条件的想法无望的。10是否意味着 相似理论失败了?还不是因为相似性理论可以说,虽然相似性不是一个充分条件,但它是一个必要条件。也就是说如果图片P代表X,那么P在某些方面类似于 X——尽管反之则不然。我们应该如何看待这个建议?


On the face of it, it seems very plausible. If a portrait represents the Queen, then surely it must resemble her in some respects. That, after all, may be what the 'goodness' of a good likeness consists in. But there are problems with this idea too. For a picture can surely represent something without resembling it. Much twentieth-century art is representational; but this is not to say that it is based on resemblance (think of Cubist pictures). Caricatures and schematic drawings, like stick figures, often bear very little resemblance to the things they represent. Yet we can usually recognise easily what they do represent. A caricature of the Queen may resemble her far less than a detailed drawing of someone else does; yet the caricature is still a picture of the Queen.11


那么要满足代表性必要条件,需要多少相似之处呢?也许可以回答说,所需要的 只是这幅画所代表的东西之间存在一些相似之处,无论多么松散也许可以松散地理解相似性,以纳入立体主义图片中涉及的表现形式。很好;但现在相似性的概念理论的作用不如以前那么如果一张示意图图片(比如,某些公司在其徽标中使用的那种)只需要以非常小的方式与它所代表的事物相似那么很难看出“如果图片代表X,必须类似于X'。因此即使一幅画确实所代表相似也必须相似性以外的因素 进入 再现使其成为可能。

I am not denying that pictures often do resemble the things they represent. Obviously they do, and this may be part of what makes them pictures (rather than sentences, graphs or diagrams). All I am questioning is whether the notion of resemblance can do much to explain how pictures represent. The idea that resemblance is a necessary condition of pictorial representation may well be true; but the question is: what else makes a picture represent what it does?12


One point that needs to be emphasised here is that pictures typically need interpretation. In Michelangelo's Last Judgement in the Sistine Chapel, for example, we see the souls in hell greeting their final fate, writhing in agony beneath the mighty figure of Christ above them, who delivers his judgement. Why do we not see the figure of Christ as welcoming the souls up from the depths, raising his hand in friendly encouragement – 'Hey, come on up, it's cooler up here'? (Remember the picture of greeting on Pioneer 10's metal plate.) Well, we could; but we don't. The reason is that we see the picture in the light of certain assumptions we make about it – what we might vaguely call the picture's 'context'. We know that the picture is a picture of the Last Judgement, that at the Last Judgement some souls are sent to eternal damnation, with Christ as judge, and so on. This is part of why we see the picture in the way we do: we interpret it.


We can illustrate the point with an example of Wittgenstein's.13 Imagine a picture of a man with a stick walking up a slope (see Figure 1.1). What makes this a picture of a man walking up the slope, rather than of a man sliding gently down it? Nothing in the picture itself. It is because of what we are used to in our everyday experience, and the sort of context in which we are used to seeing such pictures, that we see the picture one way rather than the other. We have to interpret the picture in the light of this context – pictures do not interpret themselves.


I shall not pursue the resemblance theory any further; the interpretation of pictures is mentioned here to illustrate how little the notion of resemblance tells us about pictorial representation. What I want to do now is briefly to take the second question I raised at the end of the earlier section and apply it to pictorial representation. We can put the question like this: suppose we had a complete theory of pictorial representation. Would it then be possible to explain all other forms of representation in terms of pictorial representation?


The answer is no, for a number of reasons. One reason we have already encountered is that pictures often need to be interpreted, and it would not help for the interpretation to be another picture, since that might need interpreting too.

Figure 1.1 Old man with a stick


But, although the answer is no, we can learn something about the nature of representation by seeing the limits of pictorial representation.


A simple example will illustrate the point. Suppose I say, 'If it doesn't rain this afternoon, we will go for a walk'. This is a fairly simple sentence – a piece of linguistic representation. But suppose we wanted to explain all representation in terms of pictorial representation; then we would need to be able to express this piece of linguistic representation in pictures. How could we do it?


Well, perhaps we could draw a picture of a rain-free scene with the two of us walking in it. But what do we do about the idea of 'this afternoon'? We cannot put a clock in the picture: remember, we are trying to reduce all representation to pictures, and a clock does not represent the time by depicting it. Indeed, the idea of 'depicting' the time makes little sense.

And there is a further reason why this first picture cannot be right: it is just a picture of you and me walking in a rain-free area. What we wanted to express was a particular combination and relationship between two ideas: first, it’s not raining, and, second, you and me going for a walk. So perhaps we should draw two pictures: one of the rain-free scene and one of you and me walking. But this can’t be right either: for how can this pair of pictures express the idea that if it doesn’t rain, then we will go for a walk? Why shouldn’t the two pictures be taken as simply representing a non-rainy scene and you and me going for a walk? Or why doesn’t it represent the idea that either we will go for a walk or it won’t rain? When we try to represent the difference between . . . and . . . , if . . . then . . . , and either . . . or . . . in pictures, we draw a complete blank. There just seems no way of doing it.

One important thing that pictures cannot do, then, is represent certain sorts of relations between ideas. They cannot represent, for example, those relations which we express using the words if . . . then . . . , . . . and . . . , either . . . or . . . and not. (Why not? Well, the picture of the non-rainy scene may equally be a picture of a sunny scene – how can we pictorially express the idea that the scene is a scene where there is no rain? Perhaps by drawing rain and putting a cross through it – as in a 'No Smoking' sign – but again we are using something that is not a picture: the cross.) For this reason at least, it is impossible to explain or reduce other forms of representation to pictorial representation.

Linguistic representation

A picture may sometimes be worth a thousand words, but a thousand pictures cannot represent some of the things we can represent using words and sentences. So how can we represent things using words and sentences?

A natural idea is this: 'words don't represent things in any natural way; rather, they represent by convention. There is a convention among speakers of a language that the words they use will mean the same thing to one another; when speakers agree or converge in their conventions, they will succeed in communicating; when they don't, they won't'.14

It is hard to deny that what words represent is at least partly a matter of convention. But what is the convention, exactly? Consider the English word ‘dog’. Is the idea that there is a convention among English speakers to use the word ‘dog’ to represent dogs, and only dogs (so long as they are intending to speak literally, and to speak the truth)? If so, then it is hard to see how the convention can explain representation, as we stated the convention as a ‘convention to use the word “dog” to represent dogs’. As the convention is stated by using the idea of representation, it takes it for granted: it cannot explain it. (Again, my point is not that convention is not involved in linguistic representation; the question is rather what the appeal to convention can explain on its own.)

An equally natural thought is that words represent by being conventionally linked to the ideas that thinkers intend to express by using those words. The word 'dog' expresses the idea of a dog, by means of a convention that links the word to the idea. This theory has a distinguished philosophical history: something like it goes back at least as far as Thomas Hobbes (1588–1679), and especially to John Locke (1632–1704), who summed up the view by saying that words are the 'sensible marks of ideas'.15

What are ideas? Some philosophers have held that they are something like mental images, pictures in the mind. So when I use the word 'dog', this is correlated with a mental image in my mind of a dog. A convention associates the word 'dog' with the idea in my mind, and it is in virtue of this association that the word represents dogs.

There are many problems with this theory. For one thing, is the image in my mind an image of a particular dog, say Fido? But, if so, why suppose that the word 'dog' means dog, rather than Fido? In addition, it is hard to imagine what an image of 'dogness' in general would be like.16 And even if the mental image theory of ideas can in some way account for this problem, it will encounter the problem mentioned at the end of the last section. Although many words can be associated with mental images, many can't: this was the problem that we had in trying to explain and, or, not and if in terms of pictures.

However, perhaps not all ideas are mental images – often we think in words, for example, and not in pictures at all. If so, the criticisms in the last two paragraphs miss the mark. So let's put to one side the theory that ideas are mental images, and let's just consider the claim that words represent by expressing ideas – whatever ideas may turn out to be.

This theory does not appeal to a 'convention to represent dogs', so it is not vulnerable to the same criticism as the previous theory. But it cannot, of course, explain representation, because it appeals to ideas, and what are ideas but another form of representation? A dog-idea represents dogs just as much as the word 'dog' does; so we are in effect appealing to one kind of representation (the idea) to explain another kind (the word). This is fine, but if we want to explain representation in general then we also need to explain how ideas represent.

Perhaps you will think that this is asking too much. Perhaps we do not need to explain how ideas represent. If we explain how words represent by associating them with ideas, and explain too how pictures are interpreted in terms of the ideas that people associate with them in their minds, perhaps we can stop there. After all, we can't explain everything: we have to take something for granted. So why not take the representational powers of ideas for granted?

I think this is unsatisfactory. If we are content to take the representational powers of the mind for granted, then why not step back and take the representational powers of language for granted? For it's not as if the mind is better understood than language – in fact, in philosophy, the reverse is probably true. Ideas, thoughts and mental phenomena generally seem even more mysterious than words and pictures. So, if anything, this should suggest that we should explain ideas in terms of language, rather than vice versa. But I don't think we can do this. So we need to explain the representational nature of ideas.

Before moving on to discuss ideas and mental representation, I should be very clear about what I am saying about linguistic representation. I am not saying that the notions I mentioned – of convention, or of words expressing ideas – are the only options for a theory of language. Not at all. I introduced them only as illustrations of how a theory of linguistic representation will need, ultimately, to appeal to a theory of mental representation. Some theories of language will deny this, but I shall ignore those theories here.17

The upshot of this discussion is that words, like pictures, do not represent in themselves ('intrinsically'). They need interpreting – they need an interpretation assigned to them in some way. But how can we explain this? The natural answer, I think, is that interpretation is something which the mind bestows upon words. Words and pictures gain the interpretations they do, and therefore represent what they do, because of the states of mind of those who use them. But these states of mind are representational too. So to understand linguistic and pictorial representation fully, we have to understand mental representation.

Mental representation

So how does the mind represent anything? Let's make this question a little easier to handle by asking how individual states of mind represent anything. By a 'state of mind', or 'mental state', here I mean something like a belief, a desire, a hope, a wish, a fear, a hunch, an expectation, an intention, a perception and so on. I think that all of these are states of mind which represent the world in some way. This will need a little explaining.

When I say that hopes, beliefs, desires and so on represent the world, I mean that every hope, belief or desire is directed at something. If you hope, you must hope for something; if you believe, you must believe something; if you desire, you must desire something. It does not make sense to suppose that a person could simply hope, without hoping for anything; believe, without believing anything; or desire, without desiring anything. What you believe or desire is what is represented by your belief or desire.

We will need a convenient general term for states of mind which represent the world, or an aspect of the world. I shall use the term 'thought', as it seems the most general and neutral term belonging to the everyday mental vocabulary. From now on in this book, I will use the term 'thought' to refer to all representational mental states. So states of belief, desire, hope, love and so on are all thoughts in my sense, as they all represent things. (Whether all mental states are thoughts in this sense is a question I shall leave until the end of the chapter.)

What can we say in general about how thoughts represent? I shall start with thoughts which are of particular philosophical interest: those thoughts which represent (or are about) situations. When I hope that there will be bouillabaisse on the menu at my favourite restaurant tonight, I am thinking about a number of things: bouillabaisse, the menu, my favourite restaurant, tonight. But I am not just thinking about these things in a random or disconnected way: I am thinking about a certain possible fact or situation: the situation in which bouillabaisse is on the menu at my favourite restaurant tonight. It is a harmless variant on this to say that my state of hope represents this situation.

However, consider a different thought I might have: the belief that there is bouillabaisse on the menu tonight. This mental state does not represent the situation in quite the same sense in which the hope does. When I believe that there is bouillabaisse on the menu tonight (perhaps because I have walked past the restaurant and read the menu), I take the situation in question to be the case: I take it as a fact about the world that there is bouillabaisse on the menu tonight. But, when I hope, I do not take it to be a fact about the world; rather, I would like it to be a fact that there is bouillabaisse on the menu tonight.


There are, then, two aspects to these thoughts: the 'situation' represented, and what we could call (for want of a better word) the attitude we take to the situation. The idea of different attitudes to a situation is best illustrated by examples.


Consider the situation of my visiting Budapest. I can expect that I will visit Budapest; I can hope that I will visit Budapest; and I can believe that I have visited Budapest. All these thoughts are about, or represent, the same situation – my visiting Budapest – yet the attitudes taken to it are very different. The question therefore arises of what makes these different attitudes different; but for the moment I am concerned only with the distinction between the situation represented and the attitude taken to it.


Just as the same situation can be subject to different attitudes, so the same attitude can be directed at many different situations. I believe that I will visit Budapest soon; I also believe that there is no bouillabaisse on the menu at my favourite restaurant tonight; and I believe countless other things. A particular belief, hope or other such thought can therefore be uniquely picked out by specifying:


the attitude in question (belief, hope, expectation and so on);


情况代表了。


(It should be added, incidentally, that many attitudes come in degrees: one can want something more or less strongly, and believe something with more or less conviction; but this complication does not affect the general picture.) In general, we can describe thoughts of this kind as follows. Where 'A' stands for the person who is in the mental state, 'ψ' stands for the attitude (the Greek letter psi, for 'psychological') and 'S' stands for the situation represented, the best description takes the form:

A ψs that S

For example, Vladimir (A) believes (ψs) that it is raining (S); Renata (A) hopes (ψs) that she will visit Romania (S); and so on.

Bertrand Russell (1872–1970) called thoughts that can be picked out in this way 'propositional attitudes' – and the label has stuck.18 Though it might seem rather obscure at first glance, the term 'propositional attitude' describes the structure of these mental states quite well. I have already explained the term 'attitude'. What Russell meant by 'proposition' is something like what I am calling 'situation': it is what you have your attitude towards (so a proposition in this sense is not a piece of language). A propositional attitude is therefore any mental state which can be described in the 'A ψs that S' style.

Another piece of terminology that has been almost universally adopted is the term 'content', used where Russell used 'proposition'. According to this terminology, when I believe that there is beer in the fridge, the content of my belief is that there is beer in the fridge. And likewise with desires, hopes and so on – these are different attitudes, but they all have 'content'. What exactly 'content' is, and what it is for a mental state to have 'content' (or 'representational content'), are questions that will recur throughout the rest of this book – especially in Chapter 5. In current philosophy, the problem of mental representation is often expressed as: 'What is it for a mental state to have content?'. For the time being, we can think of the content of a mental state as what distinguishes states involving the same attitude from one another. Different beliefs are distinguished from one another (or, in philosophical terminology, 'individuated') by their different contents. So are desires; and so on with all the attitudes.

I have concentrated on the idea of a propositional attitude, because thoughts of this form will become quite important in the next chapter. But although all propositional attitudes are thoughts (by definition), it is important to stress that not all thoughts (in my sense) are propositional attitudes – that is, not all representational mental states can be characterised in terms of attitudes to situations. Take love, for instance. Love is a representational mental state: you cannot love without loving something or someone. But love is not (always) an attitude to a situation – love can be an attitude to a person, a place or a thing. Love cannot be described in the 'A ψs that S' style (try it and see). In my terminology, then, love is a kind of thought, but not a propositional attitude.19

Another interesting example is desire. Is this an attitude to a situation? On the face of it, it isn't. Suppose I desire a cup of coffee: my desire is for a thing, a cup of coffee, not for any situation. On the surface, then, desire resembles love. But many philosophers think that this is misleading, and that it under-describes a desire to treat it as an attitude to a thing. The reason is that a more accurate description of the desire is that it is a desire that a certain situation obtains: the situation in which I have a cup of coffee. All desires, it is claimed, are really desires that so-and-so – where 'so-and-so' is a specification of a situation. Desire, unlike love, is a propositional attitude.

Now, by calling representational mental states 'thoughts' I do not mean to imply that these states are necessarily conscious. Suppose Oedipus really does desire to kill his father and marry his mother. Then, by the criterion outlined above (A ψs that S), these desires count as propositional attitudes and therefore thoughts. But they are not conscious thoughts.

It might seem strange to distinguish between thought and consciousness in this way. To justify the distinction, we need a brief preliminary digression into the murky topic of consciousness; a full treatment of this subject will have to wait until Chapter 6.

Thought and consciousness

Consciousness is what makes our waking lives seem the way they do, and is arguably the ultimate source of all value in the world: 'without this inner illumination', Einstein said to the philosopher Herbert Feigl, 'the universe would be nothing but a heap of dirt'.20 But, despite the importance of consciousness, I want to distinguish certain questions about thought from questions about consciousness. To a certain extent, these questions are independent of one another.

As I say, this may seem a little strange. After all, for many people, the terms 'thought' and 'consciousness' are practically synonymous. Surely thinking is being aware of the world, being conscious of things in and outside oneself – how then can we understand thought without also understanding consciousness? (Some people even think of the terms 'conscious' and 'mental' as synonymous – for them the point is even more obvious.)

The reason for distinguishing thought and consciousness is very simple. Many of our thoughts are conscious, but not all of them are. Some of the things we think are unconscious. So, if thought can still be thought while not being conscious, then it cannot in general be essential to something's being a thought that it is conscious. It ought therefore to be possible to explain what makes thought what it is without having to explain consciousness.

What do I mean when I say that some thought is unconscious? Simply this: there are things we think, but we are not aware that we think them. Let me give a few examples, some more controversial than others.

I would be willing to bet that you think the President of the United States normally wears socks. If I asked you 'Does the President of the United States normally wear socks?' I think you would answer 'Yes'. And what people say is pretty good evidence for what they think: so I would take your answer as good evidence for the fact that you think that the President of the United States normally wears socks. But I would also guess that the words 'the President of the United States normally wears socks' had never come before your conscious mind. It's pretty likely that the issue of the President's footwear has never consciously occurred to you before; you have never been aware of thinking it. And yet, when asked, you seem to reveal that you do think it is true. Did you only start thinking this when I asked you? Can it really be right to say that you had no opinion on this matter before I asked you? ('Hm, that's an interesting question, I had never given this any thought before, I wonder what the answer is . . .') Doesn't it make more sense to say that the unconscious thought was there all along?

This example might seem pretty trivial, so let's try a more significant (and controversial) one. In Plato's dialogue Meno, Socrates is trying to defend his theory that all knowledge is recollection of truths known in the previous life of the soul. To persuade his interlocutor (Meno) of this, Socrates questions one of Meno's slaves about a simple piece of geometry: if the area of a square with sides N units long is a certain number of units, what is the area of a square with sides 2N units long? Under simple questioning (which does not give anything away) Meno's slave eventually gets the correct answer. The dialogue continues:

Socrates: What do you think, Meno? Has he answered with any opinions that were not his own?

Meno: No, they were all his.

Socrates: Yet he did not know, as we agreed a few minutes ago.

Meno: True.

Socrates: But these opinions were somewhere in him, were they not?

Meno: Yes.21

Socrates, then, argues that knowledge is recollection, but this is not the view that interests me here. What interests me is the idea that one can have a kind of 'knowledge' of (say) certain mathematical principles 'somewhere' in one without being explicitly conscious of them. This sort of knowledge can be 'recovered' (to use Socrates's word) and made explicit, but it can also lie within someone's mind without ever being recovered. Knowledge involves thinking of something; it is a kind of thought. So if there can be unconscious knowledge, there can be unconscious thought.

There are some terminological difficulties in talking about 'unconscious thoughts'. For some people, thoughts are episodes in the conscious mind, so they must be conscious by definition. Certainly, many philosophers have thought that consciousness was essential to all mental states, and therefore to thoughts. Descartes was one – to him the idea of an unconscious thought would have been a contradiction in terms. And some today agree with him.22

However, I think that these days many more philosophers (and non-philosophers too) are prepared to take very seriously the idea of an unconscious thought. One influence here is Freud's contribution to the modern conception of the mind. Freud recognised that many of the things that we do cannot be fully accounted for by our conscious minds. What does account for these actions are our unconscious beliefs and desires, many of which are 'buried' so deep in our minds that we need a certain kind of therapy – psychoanalysis – to dig them out.23

Notice that we can accept this Freudian claim without accepting specific details of Freud's theory. We can accept the idea that our actions can often be governed by unconscious beliefs and desires, without accepting many of the ideas (popularly associated with Freud's name) about what these beliefs and desires are, and what causes them – e.g. the Oedipus complex, or 'penis envy'. In fact, the essential idea is very close to our ordinary way of thinking about other people's minds. We all know people whom we think do not 'know their own minds', or who are deceiving themselves about something. But how could they fail to be aware of their own thoughts, if thoughts are essentially conscious?


For all these reasons, then, I believe that there are unconscious thoughts, and I also believe that we do not need to understand consciousness in order to understand thought. This is not to say that I deny that there is such a thing as conscious thought. The examples discussed are examples of thoughts being brought into consciousness – you bring into your conscious mind the thought that the President of the United States normally wears socks, Meno's slave brings into his conscious mind the geometrical knowledge he was not aware of having, and patients in psychoanalysis bring into their conscious minds thoughts and feelings they did not know they had. Many of the examples used throughout this book will be of conscious thoughts. But what I am interested in is what makes them thoughts, not what makes them conscious.


In his famous book The Emperor's New Mind, the mathematician and physicist Roger Penrose claims that 'true intelligence requires consciousness'.24 It may look as if I am disagreeing with this remark; in fact I am not. That true intelligence (or thought) requires consciousness does not imply that, in order to understand the nature of thought, we must understand the nature of consciousness. It implies only that anything that can think must be conscious. An analogy may help. Suppose that anything that can think, or is intelligent, must be alive. If so, then 'true intelligence requires life'. But this by itself does not imply that, in order to understand thought, we must understand life. We would need only to assume that thinking things are also alive. Our explanation of thought would not also be an explanation of life. The same goes for consciousness. So I am not disagreeing with Penrose's remark; but I am not agreeing with it either. I remain neutral on the question, because I do not know whether there could be a thinking creature whose thoughts were wholly unconscious. But, fortunately, we do not need to answer this difficult question in order to pursue the themes of this book.


Many thoughts, then, are unconscious. It is now time to return to the idea of mental representation. What have we learned about mental representation? So far, not much. But in describing the idea of thought in very general terms, and in spelling out the distinction between attitude and content (or situation represented), we have made a start. We now have at least some basic categories to work with in posing our question about the nature of mental representation. In the next section, I will link the discussion so far to some important ideas from the philosophical tradition.


Intentionality


Philosophers have a technical term for the representational character of states of mind: they call it 'intentionality'. Those mental states which exhibit intentionality – those which represent – are therefore sometimes called 'intentional states'. The terminology can be confusing, not least because not all philosophers use these terms in the same way. But it is necessary to consider the idea of intentionality, because it forms the starting point for most philosophers' attempts to deal with the puzzle of representation.


The term 'intentionality' derives from the scholastic philosophers of the Middle Ages, who were greatly interested in the problem of representation. These philosophers used the term 'intentio' for concepts; Thomas Aquinas (c.1225–1274), for example, used the phrase 'esse intentionale' (intentional existence) for the way in which things are conceptually represented in the mind. The term 'intentional inexistence' was revived by the German philosopher Franz Brentano (1838–1917). In his Psychology from an Empirical Standpoint (1874), Brentano claimed that mental phenomena are characterised by:

. . . what the scholastics of the Middle Ages called the intentional . . . inexistence of an object, and what we would call the relation to a content, the direction upon an object (which is not here to be understood as a reality) or immanent objectivity.25

Things are simpler here than they might initially seem. The phrases 'intentional inexistence', 'relation to a content' and 'immanent objectivity', despite superficial differences between them, are all different ways of expressing the same idea: that mental phenomena involve representation or presentation of the world. 'Inexistence' is meant to express the idea that the object of a thought – what the thought is about – exists in the act of thinking itself. This is not to say that when I think about my dog there is a dog 'in' my mind. Rather, it is just the idea that my dog is intrinsic to my thought, in the sense that what makes it the thought that it is is the fact that it has my dog as its object.

I will start by understanding the idea of intentionality as simply as possible – as directedness on something. Contemporary philosophers often use the term 'aboutness' as a synonym for 'intentionality': thoughts have 'aboutness' because they are about things. (I prefer the term 'directedness', for reasons that will emerge in a moment.) The essence of Brentano's claim is that what distinguishes mental phenomena from physical phenomena is that, whereas all mental phenomena exhibit this directedness, no physical phenomenon exhibits it. This claim, that intentionality is the 'mark of the mental', is sometimes called Brentano's thesis.

Before considering whether Brentano's thesis is true, we need to clear up a couple of possible confusions about the term 'intentionality'. The first is that the word looks as if it might have something to do with the ordinary ideas of intention, intending and acting intentionally. There is obviously a link between the philosophical idea of intentionality and the idea of intention. For one thing, if I intend to perform some action, A, then it is natural to think that I represent A (in some sense) to myself. So intentions may be representational (and therefore 'intentional') states.

But, apart from these connections, there is no substantial philosophical link between the concept of intentionality and the ordinary concept of intention. Intentions in the ordinary sense are intentional states, but most intentional states have little to do with intentions.

The second possible confusion is somewhat more technical. Beginners may wish to move directly to the next section, ‘Brentano’s thesis’ (see p. 36).

This second confusion is between intentionality (in the sense I am using it here) and intensionality, a feature of certain logical and linguistic contexts. The words 'intensionality' and 'intentionality' are pronounced in the same way, which adds to the confusion, and leads painstaking authors such as John Searle to specify whether they are talking about 'intentionality-with-a-t' or 'intensionality-with-an-s'.26 Searle is right: intentionality and intensionality are different things, and it is important to keep them apart in our minds.

To see why, we need to introduce some technical vocabulary from logic and the philosophy of language. A linguistic or logical context (i.e. a part of some language or logical calculus) is intensional when it is non-extensional. An extensional context is one of which the following principles are true:

(A) the principle of intersubstitutivity of co-referring expressions;

(B) the principle of existential generalisation.

The titles of these principles look rather formidable, but the logical ideas behind them are fairly simple. Let me explain.

The principle (A) of intersubstitutivity of co-referring expressions is a rather complicated title for a very simple idea. The idea is just that if an object has two names, N and M, and you say something true about it using M, you cannot turn this truth into a falsehood by replacing M with N. For example, George Orwell's original name was Eric Arthur Blair (he took the name Orwell from the River Orwell in Suffolk). Because both names refer to the same man, you cannot change the true statement:


George Orwell wrote Animal Farm


into a falsehood by substituting the name 'Eric Arthur Blair' for the name 'George Orwell'. For the statement:


Eric Arthur Blair wrote Animal Farm


is equally true. (Likewise, substituting 'Eric Arthur Blair' for 'George Orwell' cannot turn a falsehood, e.g. 'George Orwell wrote War and Peace', into a truth.) The idea behind this is simple: since the man is the same in both cases, it does not matter to the truth or falsehood of what is said which name is used to talk about him.


The terms 'George Orwell' and 'Eric Arthur Blair' are 'co-referring terms': that is, they refer to the same object. Principle (A) says that such terms can be substituted for one another without changing the truth or falsehood of the sentences in which they occur. (It is therefore sometimes known as the principle of substitutivity 'salva veritate', literally: 'saving truth'.)


What could be simpler? Unfortunately, we do not have to look far to find violations of this simple principle. Consider someone, Vladimir say, who believes that George Orwell wrote Animal Farm, but knows nothing about Orwell's original name. Then the statement:

Vladimir believes that George Orwell wrote Animal Farm


true,而语句:


Vladimir believes that Eric Arthur Blair wrote Animal Farm


false。 在这种情况下替换 共同指称的术语并不能保持真理。我们显然明显的共同指称术语的替代性原则已经失败了。然而,这个原则怎么会失败呢?这似乎是不言而喻的。


Why the principle fails in certain cases, notably in sentences about beliefs and certain other mental states, is a major concern of the philosophy of language. However, we do not need to go into the reasons for the failure here; for the purposes of defining the notion of intensionality, we need only note that the failure of principle (A) is one of the marks of non-extensionality, or intensionality.


The other mark is the failure of principle (B), 'existential generalisation'. This principle says that we can infer from a statement about something that that thing exists. For example, from the statement:


Orwell wrote Animal Farm


we can infer that:


Someone wrote Animal Farm


That is to say: if the first statement is true, then the second statement is true too.


Once again, a prominent example of where existential generalisation can fail is statements about beliefs. The statement:


Vladimir believes that Santa Claus lives at the North Pole


can be true even though the following statement is undoubtedly false:


There is someone whom Vladimir believes to live at the North Pole


Since the first of these statements can be true while the second is false, the second cannot logically follow from the first. This is an example of the failure of existential generalisation.

To summarise: intensionality is a feature of sentences and linguistic items; a sentence is intensional when it is non-extensional; it is non-extensional when one or both of the two principles (A) and (B) can fail to apply. Notice that I say the principles can fail to apply, not that they must. Of course, there are many cases when we can substitute co-referring expressions in belief sentences; and there are many cases where we can conclude that something exists from

a belief sentence which is about that thing. But the point is that we have no guarantee that these principles will hold for all belief sentences and other ‘intensional contexts’.
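The two principles, and the way they can fail, can be set out schematically (nothing in the argument turns on this notation). Where 'a' and 'b' are names and 'F' is a predicate:

(A) From: F(a) and a = b, infer: F(b)

(B) From: F(a), infer: there is something, x, such that F(x)

In an extensional context, both inferences are guaranteed to preserve truth. Prefix the premises with 'Vladimir believes that . . .', however, and neither is: 'Vladimir believes that F(a)', together with 'a = b', does not entail 'Vladimir believes that F(b)'; and 'Vladimir believes that F(a)' does not entail that there is something which Vladimir's belief is about.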

What has this intensionality got to do with our topic, intentionality? At first sight, there is an obvious connection. The examples that we used of sentences exhibiting intensionality were sentences about beliefs. It is natural to suppose that the principle of substitutivity of co-referring terms breaks down here because whether a belief sentence is true depends not just on the object represented by the believer, but on the way that the object is represented. Vladimir represents Orwell as Orwell, and not as Blair. So the intensionality seems to be a result of the nature of the representation involved in a belief. Perhaps, then, the intensionality of belief sentences is a consequence of the intentionality of the beliefs themselves.

Likewise with the failure of existential generalisation. The failure of this principle in the case of belief sentences is perhaps a natural consequence of the fact (mentioned above) that representations can represent 'things' that don't exist. The fact that we can think about things that don't exist does seem to be one of the defining characteristics of intentionality. So, once again, perhaps, the intensionality of (for example) belief sentences is a consequence of the intentionality of the beliefs themselves.27

However, this is as far as we can go in linking the notions of intensionality and intentionality. There are two reasons why we cannot link the two notions further:

(1) There can be intensionality without intentionality (representation). That is, there can be sentences which are intensional but do not have anything to do with mental representation. The best-known examples are sentences involving the notions of possibility and necessity. To say that something is necessarily so, in this sense, is to say that it could not have been otherwise. From the two true sentences,

Nine is necessarily greater than five

The number of planets is nine

we cannot infer that:

The number of planets is necessarily greater than five

since it is not necessarily true that there are nine planets. There could have been four planets, or none. So the principle of substitutivity of co-referring terms (‘nine’ and ‘the number of planets’) fails – but not because of anything to do with mental representation.28
因为颗行星不一定是真的可能 颗行星,或者没有。 因此共指术语(“九”行星的数量”)替代性原则失败了——但并不是因为与心理表征有任何关系28

(2) There can be descriptions of intentionality which do not exhibit intensionality. An example is given by sentences of the form 'X sees Y'. Seeing is a case of intentionality, or mental representation. But, if Vladimir sees Orwell, then surely he also sees Blair, and the author of The Road to Wigan Pier, and so on. Principle (A) seems to apply to 'X sees Y'. Moreover, if Vladimir sees Orwell, then surely there is someone whom he sees. So principle (B) applies to sentences of the form 'X sees Y'.29 Not all descriptions of intentionality are intensional; so intensionality in the description is not necessary for intentionality to be described.

This last argument, (2), is actually rather controversial, but we don't really need it in order to distinguish intentionality from intensionality. The first argument will do that for us on its own: in the terminology of necessary and sufficient conditions introduced earlier, we can say that intensionality is not sufficient for intentionality, and it may not even be necessary. That is, since you can have intensionality without any mention of intentionality, intensionality is not sufficient for the presence of intentionality. This is enough to show that these are very different concepts, and that we cannot use intensionality as a criterion of intentionality.30

Let's now leave intensionality behind, and return to our main theme: intentionality. Our final task in this chapter is to consider Brentano's thesis that intentionality is the 'mark' of the mental.

Brentano's thesis

As I remarked earlier, Brentano thought that all and only mental phenomena exhibit intentionality. This idea, Brentano's thesis, has been very influential in recent philosophy. But is it true?

Let's divide the question into two sub-questions:

Do all mental states exhibit intentionality?

Do only mental states exhibit intentionality?

Again the terminology of necessary and sufficient conditions is useful. The first sub-question may be recast: is mentality sufficient for intentionality? And the second: is mentality necessary for intentionality?

It is tempting to think that the answer to the first sub-question is 'No'. To say that all mental states exhibit intentionality is to say that all mental states are representational. But, this line of thought goes, we can know from introspection that many mental states are not representational. Suppose I have a sharp pain at the base of my spine. This pain is a mental state: it is the sort of state which only a conscious being could be in. But pains do not seem to be representational in the way that thoughts are: pains are just feelings, they are not about or 'directed upon' anything. Another example: suppose that you have a kind of generalised depression or misery. It may be that you are depressed without being able to say what it is that you are depressed about. Isn't this another example of a mental state without directedness on an object?

Let's take the case of pain first. First, we must be clear about what we mean by saying that pain is a mental state. We sometimes call a pain 'physical' to distinguish it from the 'mental' pain of (say) the loss of a loved one. These are obviously very different kinds of mental state, and it is wrong to think that they have very much in common just because we call them both 'pain'. But this fact doesn't make the pain of (say) a toothache any less mental. For pain is a state of consciousness: nothing could have a pain unless it was conscious, and nothing could be conscious unless it had a mind.

Does the existence of sensations refute the first part of Brentano's thesis, that mentality is sufficient for intentionality? Only if it is true that they are wholly lacking in any intentionality. And this does not

seem to be true.31 Although we would not say that my back pain is 'about' anything, it does have some representational character in so far as it feels to be in my back. I could have a pain that feels exactly the same, 'pain-wise', but is in the top of my spine rather than the base of my spine. The difference in how the two pains feel would purely be a matter of where they are felt to be. To put the point more vividly: I could have two pains, one in each hand, which felt exactly the same, except that one felt to be in my right hand, and the other felt to be in my left hand. This felt location is plausibly a difference in intentionality – in what the mental state is 'directed on' – so it is not true that pains (at least) have no intentionality whatsoever.


This does not mean, of course, that pains are propositional attitudes, since they are not directed on situations. Attributions of pain, such as 'Oswaldo feels pain', do not fit the 'A Φs that S' form (where 'Φ' stands in for the attitude verb) which is the criterion for attributions of propositional attitudes. But the fact that a mental state is not a propositional attitude does not mean that it is not intentional; for, as we have seen, not all thoughts or intentional mental states are propositional attitudes (love was our earlier example). If we understand the notion of 'representational character', or intentionality, in the general way adopted here, it is hard to deny that pains have a representational character.


另一个例子, 无方向抑郁或痛苦嗯,当然,有一种抑郁症的事情,患有抑郁症的人无法确定他们抑郁的原因是什么。但这本身并不意味着这种抑郁没有对象,它没有指向性。首先,国家主体必须能够识别其对象不能成为某物是有意状态的标准——否则某些形式的 自欺 欺人是不可能的。但是, 更重要的是, 这种情绪描述为不针对任何事物,这错误地描述了它。因为任何类型的抑郁症通常都是外部世界的彻底消极”——刘易斯·沃尔珀特 (Lewis Wolpert) 的经济用语来说。32 对于“与任何特别无关”萧条以及具有确定的、易于识别的对象萧条,都是如此


Generalised depression is a way of experiencing the world: everything seems bad, nothing seems worth doing; the depressed person's world 'shrinks'. That is, generalised depression is a way in which a person's mind is directed upon the world, and is therefore intentional, since the world 'in general' can still be the object of a mental state.


So it is not obvious that there are any wholly non-intentional mental states. Nonetheless, there may still be non-intentional properties or features of mental states: for example, although my toothache is indeed intentionally directed on my tooth, it may have a distinctive nagging quality that is not intentional at all: the nagging is not directed on anything, it is just there. Such apparent properties are sometimes called qualia. If sensations like pain have such properties, then there may be a residual non-intentional element in sensation, even if the sensation as a whole mental state is intentional. So even if the first part of Brentano's thesis were true of whole mental states – that they are all intentional – there might still be non-intentional elements in mental life. This would not be a total victory for Brentano's thesis.

So much, then, for the idea that mentality is sufficient for intentionality. But is mentality necessary for intentionality? That is: is it true that if something exhibits intentionality, then that thing is (or has) a mind? Are minds the only things in the world that have intentionality? This is more tricky. To hold that minds are not the only things that have intentionality, we need to give an example of something that has intentionality but doesn't have a mind. And it seems that there are plenty of examples. Take books. This book contains many sentences, all of which have meaning, represent things and therefore have intentionality in some sense. But the book doesn't have a mind.


The natural reply to this is to adopt the line of thought I used when discussing linguistic representation above. That is, we should say that the sentences in this book do not have intentionality in themselves; they have intentionality only because they are interpreted by the book's readers. The interpreting states of mind of the readers, however, do have intentionality intrinsically.


Philosophers sometimes mark this distinction between books and minds by talking of 'original' and 'derived' intentionality. The intentionality in a book is merely derived intentionality: it derives from the minds of those who write and read the book. But our minds have original intentionality: their intentionality does not depend on, or derive from, the intentionality of anything else.33


We can therefore recast our question as follows: can anything other than minds have original intentionality? This question is very puzzling. One problem with it is that, if we came across something that exhibited original intentionality, it is hard to see how it could be a further question whether that thing had a mind. Do we want to say, then, that only minds as we know them can exhibit original intentionality? The difficulty here is that this begins to look like a mere stipulation: if, for example, we found that computers were capable of original intentionality, we might well say, 'How amazing! A computer can have a mind!'. Or we could decide to use the terms differently and say, 'How amazing! Something can have original intentionality without having a mind!'. The difference between these two reactions seems largely a matter of terminology. I will have more to say about this question in Chapter 3.


The second part of Brentano's thesis, that mentality is necessary for intentionality, thus introduces some puzzling questions. In general outline, it seems very plausible. However, we should reserve judgement on it until we have found out more about what it is to have a mind.


结论:再现心灵


先驱10 号的星际“信”的例子使代表性令人费解的性质成为焦点。在那之后,考虑了图像再现,以及图像再现相似理论,因为这种表现乍一看似乎比其他种类更简单。但这种表象是骗人的。不仅 相似性似乎 建立代表性微不足道的基础,而且图片需要解释。解释

seems necessary for linguistic representation too. And I then suggested that interpretation derives from mental representation, or intentionality. To understand representation, we need to understand representational states of mind. This is the topic of the next chapter.

Further reading

Chapter 1 of Nelson Goodman's Languages of Art (Indianapolis, Ind.: Hackett 1976) is an important discussion of pictorial representation. Ian Hacking's Why Does Language Matter to Philosophy? (Cambridge: Cambridge University Press 1975) is a very readable semi-historical account of the relation between ideas and linguistic representation. A good introduction to the philosophy of language is Alex Miller's Philosophy of Language (London: UCL Press 1997). More advanced is Richard Larson and Gabriel Segal, Knowledge of Meaning: an Introduction to Semantic Theory (Cambridge, Mass.: MIT Press 1995), which integrates ideas from recent philosophy of language and linguistics. An excellent collection of essential readings in this area of the philosophy of language is A.W. Moore (ed.), Meaning and Reference (Oxford: Oxford University Press 1993). For more on the idea of intentionality, see Chapter 1 of my Elements of Mind (Oxford: Oxford University Press 2001). An important discussion is Robert Stalnaker's Inquiry (Cambridge, Mass.: MIT Press 1984), Chapters 1 and 2. John Searle's Intentionality (Cambridge: Cambridge University Press 1983) is an accessible book on the phenomena of intentionality. A useful collection of essays, many of them quite technical, on the idea of a 'propositional attitude' is Nathan Salmon and Scott Soames (eds.), Propositions and Attitudes (Oxford: Oxford University Press 1988). The best one-volume collection of readings in the philosophy of mind in general is still David Rosenthal (ed.), The Nature of Mind (Oxford: Oxford University Press 1990). For further reading on consciousness, see Chapter 6 below (pp. 231–232).

2


Understanding thinkers and their thoughts

I have said that, in order to understand representation, we must understand thought. But how much do we really know about thought? Or, for that matter, how much do we know about minds in general?


You might think that this is a question that only the science of the brain can really answer. But, if that were true, then most people would know very little about thoughts and minds. After all, most people have never studied the brain, and even to the experts some aspects of the brain remain deeply mysterious. So, if we had to know about the details of the brain's functioning in order to understand the mind, then few of us would know much about the mind.


But, in one sense, we do know a great deal about minds. In fact, minds are so familiar to us that this fact can at first escape notice. What I mean is that we know that we have thoughts, experiences, memories, dreams, sensations and emotions, and we know that other people have them too. We are well aware of the subtle differences between various kinds of mental state: between hope and expectation, for example, or between regret and remorse. This knowledge about minds is put to use in understanding other people. Much of our everyday life depends on our knowledge of what other people are thinking, and we are usually very good at knowing what this is. We learn what other people are thinking by observing them, listening to them, talking to them and getting to know their characters. This knowledge of people often enables us to predict what they will do, frequently with an accuracy that would put the weather forecasters to shame.


The sorts of 'predictions' to consider here are very ordinary ones. Suppose, for example, that you phone a friend and arrange to meet her for lunch tomorrow. I would guess (depending on who the friend is) that many of us have more confidence that the friend will turn up than we have in the weather forecast. Yet, in making this




'prediction', we rely on our knowledge of her mind: that she understood what you said to her, that she knows where the restaurant is, that she wants to have lunch with you, and so on.


So, in this sense at least, we are all experts on the mind. But notice that this by itself does not mean that the mind is distinct from the brain. For it is entirely consistent with the fact that we know a lot about these mental states (desires, understanding and the like) that they should turn out, in the end, to be merely biochemical states of the brain. If that were so, then our knowledge about minds would also be knowledge about brains, although it might not seem that way to us.


Fortunately, we do not have to settle the question of whether the mind is the brain in order to work out what we know about minds. To explain why not, I need to say something about the notorious 'mind–body problem'.


The mind–body problem


The mind–body problem is the problem of how mind and body are connected to one another. We know, of course, that they are connected. We know that when people's brains are damaged, their capacity to think is altered. We all know that when people take narcotics or drink too much alcohol, these bodily activities affect the brain, and in turn affect their minds. Our minds and the matter that makes up our bodies are obviously related in some way; but how?

One reason this is a problem is because, on the one hand, it seems obvious that we must just be entirely made up of matter and, on the other hand, it seems obvious that we cannot just be made up of matter; we must be something more. We think we must just be matter, for example, because we believe that human beings have evolved from lower forms of life, which themselves were made entirely from matter – when minds first evolved, the raw material out of which they evolved was just complex matter. And it is plausible to believe that we are entirely made up of matter – for example, if all my matter were taken away, bit by bit, there would be nothing of me left.


But it seems so hard to believe that we are, underneath it all, just matter – just a few dollars' worth of carbon, water and some minerals. It is easy for anyone who has experienced the slightest damage to their body to get the sense that it is just incredible that this fragile, messy matter constitutes their nature as thinking, conscious agents. Likewise, although people sometimes talk of the 'chemistry' that occurs between people who are in love, the usage is obviously metaphorical – the idea that love itself is literally 'nothing but a complex chemical reaction' seems just absurd.

I once heard a (probably apocryphal) story that illustrates this feeling.1 According to the story, some medical researchers in the 1940s discovered that female cats who were deprived of magnesium in their diet stopped caring for their offspring. This was reported in a newspaper under the headline, 'Motherlove is magnesium'. Whether the story is true doesn't matter – what matters is why we find it funny. Thinking of our conscious mental lives as 'really' being complex physical interactions between chemicals seems to be as absurd as thinking of motherlove as 'really' being magnesium.

Or is it? Scientists are finding more and more detailed correlations between psychological disorders and specific chemicals in the brain.2 Is there a limit to what they can find out about these correlations? It seems a desperate last resort to insist, from a position of almost total ignorance, that there must be a limit. For we just don't know. Perhaps the truth isn't as simple as 'motherlove is magnesium' – but may it not be too far away from that?


So we are dragged first in one direction, and then in the other. Of course, we say to ourselves, we are just matter, organised in complex ways; but then, on reflection, it seems impossible that we are just matter; there must be something more to us. This, in brief, is one way of expressing the mind–body problem. It has proved to be one of the most intractable problems in philosophy, so much so that some philosophers have thought it impossible to solve. The seventeenth-century English philosopher Joseph Glanvill (1636–1680) expressed this idea pointedly: 'How the purer Spirit is united to this Clod, is a knot too hard for fallen Humanity to unty.'


Others have been more optimistic, and have offered solutions to the problem. Some, the materialists or physicalists, think that, despite our feelings to the contrary, it is possible to show that the mind is just complex matter: the mind is just the matter of the brain organised in a certain complex way. Others think that the mind cannot just be matter, and must be something else: some other kind of thing. For example, those who believe that we have 'immaterial' souls which survive the deaths of our bodies must deny that our minds and our bodies are the same things. For, if our minds and our bodies were the same, how could they survive the destruction of these bodies? These philosophers are dualists, since they think there are two main kinds of thing: the material and the mental. (A less common solution these days is to claim that everything is, at bottom, mental: this is idealism.)

Materialism, in one of its many varieties, tends to be the orthodox approach to the mind–body problem these days. Dualism is less common, but still defended vigorously by its proponents.3 In Chapter 6 ('Consciousness and physicalism'), I will return to this problem, and will attempt to make it more precise and to outline what is at issue between dualism and materialism. But, for the time being, we can put the mind–body problem to one side when investigating the problem of mental representation. Let me explain.

The problem about mental representation can be expressed very simply: how can the mind represent anything at all? Suppose for the moment that materialism is true: the mind is nothing but the brain. How does this help with the problem of mental representation? Can't we just rephrase the question and ask: how can the brain represent anything at all? This seems just as hard to understand as the question about the mind. For all its complexity, the brain is just a piece of matter, and how a piece of matter can represent anything else seems just as puzzling as how a mind can represent something – whether that mind is a piece of matter or not.

Suppose for a moment that materialism is true, and think about what is inside your head. There are about 100 billion brain cells. These form a substance of a grey and white watery consistency resembling yoghurt. About a kilogram of this stuff constitutes your brain. If materialism is true, then this yoghurty substance alone

enables you to think about yourself, your life and the world. It enables you to reason about what to do. It enables you to have experiences, memories, emotions and sensations. But how? How can this watery yoghurty substance – this 'clod' – constitute your thoughts?

On the other hand, let's suppose dualism is true: the mind is not the brain but is something else, distinct from the brain, like an 'immaterial soul'. Then it seems that we can pose the same question about the immaterial soul: how can an immaterial soul represent anything at all? Descartes believed that mind and body were distinct things: the mind was, for Descartes, an immaterial soul. He also thought that the essence of this soul is to think. But to say that the essence of the soul is to think does not answer the question 'How does the soul manage to think?'. In general, it's not very satisfactory to respond to the question 'How does this do that?' with the answer 'Well, it's because it's in the essence (or nature) of this to do that'. To think that that's all there is to it would be to be like the famous doctor in Molière's play, Le Malade imaginaire, who answered the question of how opium sends you to sleep by saying that it has a virtus dormitiva or a 'dormitive virtue', i.e. it is in the essence or nature of opium to send one to sleep.

Both materialism and dualism, then, need a solution to the problem of representation. The upshot is that answering the mind–body problem with materialism or dualism does not by itself solve the problem of representation. For the latter problem will remain even when we have settled on materialism or dualism as an answer to the former problem. If materialism is true, and everything is matter, we still need to know what is the difference between thinking matter and non-thinking matter. And if dualism is true, then we still need to know what it is about this non-material mind that enables it to think.

(On the other hand, if idealism is true, then there is a sense in which everything is thought, anyway, so the problem does not arise. However, idealism of this kind is much harder to believe – to put it mildly – than many philosophical views, so it looks as if we would be trading one mystery for another.)

This means that we can discuss the main issues of this book without having to decide whether materialism or dualism is the correct solution to the mind–body problem. The materialism/dualism controversy has no direct bearing on our problem. For the purposes of this chapter, that is a good thing. For, although we do not know what the relation between the mind and the brain is, what interests us here is what we know about minds in general, and about thoughts in particular. That is the subject of the rest of this chapter. We will return to the mind–body problem in Chapter 6.


Understanding other minds


What, then, do we know about minds? One way to approach this question is to ask: 'How do we find out about minds?'. These are not the same question, of course. (Compare the questions, 'What do we know about water?' and 'How do we find out about water?'.) But, as we shall see, in the case of the mind, asking how we know will shed considerable light on what we know.

One thing that seems obvious is that we know about the minds of others in a very different way from the way we know our own minds. We know about our own minds partly by introspecting. If I am trying to figure out what I think about a certain question, I can concentrate on the contents of my conscious mind until I work it out. But I can't concentrate in the same way on the contents of your mind in figuring out what you think. Sometimes, of course, I cannot tell what I really think, and I have to consult others – a friend or a therapist, perhaps – about the significance of my thoughts and actions, and what they reveal about my mind. But the point is that learning about one's own mind is not always like this, whereas learning about the minds of others always is.

The way we know about the states of mind of others is not, so to speak, symmetrical to the way we know our own states of mind. This 'asymmetry' is related to another important asymmetry: the different ways we use to know about the position of our own bodies and the bodies of others. In order to know whether your legs are crossed, I have to look, or use some other form of observation or inspection (I could ask you). But I don't need any sort of observation to tell

me whether my legs are crossed. Normally, I know this immediately, without observation. Likewise, I can typically tell what I think without having to observe my words and watch my actions. Yet I can't tell what you think without observing your words and actions.

Where the minds of others are concerned, it seems obvious that all we have to go on is what people say and do: their observable behaviour. So how can we get from knowledge of people's observable behaviour to knowledge of what they think?

A certain sort of philosophical scepticism says that we can’t. This is ‘scepticism about other minds’, and the problem it raises is known as ‘the problem of other minds’. This will need a brief digression. According to this sceptical view, all that we really know about other people are facts about their observable behaviour. But it seems possible that people could behave as they do without having minds at all. For example, all the people you see around you could be robots programmed by some mad scientist to behave as if they were conscious, thinking people: you might be the only real mind around. This is a crazy hypothesis, of course: but it does seem to be compatible with the evidence we have about other minds.

Compare scepticism about other minds with scepticism about the existence of the 'external world' (that is, the world outside our minds). This kind of scepticism says that, in forming your beliefs about objects in the world, all you really have to go on is the evidence of your senses: your beliefs formed on the basis of experiences. But these experiences and beliefs could be just as they are, yet the 'external' world be very different from the way you think it is. For example, your brain could be kept in a vat of nutrients, its input and output nerves being stimulated by a mad scientist to make it appear that you are experiencing the world of everyday objects. This too is a crazy hypothesis: but it also seems to be compatible with your experience.4

These versions of scepticism are not meant to be philosophically tenable positions: there have been few philosophers in history who have seriously held that other people do not have minds. What scepticism does is force us to uncover what we really know, and force us to justify how we know it. To answer scepticism, we need to give an

account of what it is to know something, and therefore account for what we 'really' know. So the arguments for and against scepticism belong properly to the theory of knowledge (called epistemology) and lie outside the scope of this book. For this reason, I'm going to put scepticism to one side. My concern in this book is what we believe to be true about our minds. In fact, we all believe that we know a lot about the minds of others, and I think we are undoubtedly right in this belief. So let us leave it to the epistemologists to tell us what knowledge is – but whatever it is, it had better allow the obvious fact that we know a lot about the minds of others.

Our question, then, is about how we come to know about other minds – not about whether we know. That is, given that we know a lot of things about the minds of others, how do we know these things? One aspect of the sceptical argument that seems hard to deny is this: all we have to go on when understanding other people is their observable behaviour. How could it be otherwise? Surely we do not perceive other people's thoughts or experiences – we perceive their observable words and their actions.5 So the question is: how do we get from the observable behaviour to knowledge of their minds? One answer that was once seriously proposed is that the observable behaviour is, in some sense, all there is to having a mind: for example, all there really is to being in pain is 'pain-behaviour' (crying, moaning, complaining, etc.). This view is known as behaviourism, and it is worth starting our examination of our knowledge of minds with an examination of behaviourism.

Though it seems very implausible, behaviourism was, for a short time in the twentieth century, popular in both psychology and the philosophy of mind.6 It gives a straightforward answer to the question of how we know the minds of others. But it makes the question of how we know our own minds very problematic, because, as I noted above, we can know our own minds without observing our behaviour. (Hence the popular philosophical joke, repeated ad nauseam to generations of students: two behaviourists meet in the street; one says to the other, 'You're feeling pretty well today, how am I feeling?'.) This aspect of behaviourism goes hand in hand with its deliberate disregard (or even its outright denial) of subjective,

conscious experience – what it's like, from the inside, to have a mind.

I do not want to dwell on these shortcomings of behaviourism, which are discussed in detail in many other books on the philosophy of mind. What I want to concentrate on is behaviourism's intrinsic inadequacy: the fact that, even in its own terms, it cannot account for facts about the mind purely in terms of behaviour.7


An obvious initial objection to behaviourism is that there are many thoughts that are not manifested in behaviour at all. I believe, for example, that Riga is the capital of Latvia, though I have never expressed this belief in any behaviour. Does behaviourism therefore deny that I have this belief? No. Behaviourism says that a belief requires not actual behaviour but a disposition to behave. It would compare belief to a disposition such as the solubility of a lump of sugar. A lump of sugar can be soluble even if it is never placed in water; its solubility consists in the fact that it would dissolve if it were placed in water. Similarly, to believe that Riga is the capital of Latvia is to be disposed to behave in certain ways.

This seems more plausible until we ask what this 'certain way' is. What is the behaviour that relates to the belief that Riga is the capital of Latvia as the dissolving of the sugar relates to its solubility? One possibility is that the behaviour is verbal: saying 'Riga is the capital of Latvia' when asked the question 'What is the capital of Latvia?'. (So asking the question would be analogous to putting the sugar in water.)

Simple as it is, this suggestion cannot be right. For I will only answer 'Riga is the capital of Latvia' to the question 'What is the capital of Latvia?' if, among other things, I understand English. But understanding English is not a precondition for believing that Riga is the capital of Latvia: plenty of monoglot Latvians have true beliefs about their capital. So understanding English must be a distinct mental state from believing that Riga is the capital of Latvia, and this too must be explained in behavioural terms. Let's bypass the question of whether understanding English can be explained in purely behaviourist terms – to which the answer is without doubt 'No'8 – and pursue this example for a moment.

Suppose that the behaviourist explanation of my understanding of the sentence ‘Riga is the capital of Latvia’ is in terms of my disposition to utter the sentence. This disposition cannot, obviously, just be the disposition to make the sounds ‘Riga is the capital of Latvia’: a parrot could have this disposition without understanding the sentence. What we need (at least) is the idea that the sounds are uttered with understanding, i.e. certain utterances of the sentence, and certain ways of responding to the utterance, are appropriate and others are not. When is it appropriate to utter the sentence? When I believe that Riga is the capital of Latvia? Not necessarily, as I can utter the sentence with understanding without believing it. Perhaps I utter the sentence because I want my audience to believe that Riga is the capital of Latvia, though I myself (mistakenly) believe that Vilnius is.

But, in any case, the behaviourist cannot appeal to the belief that Riga is the capital of Latvia in explaining when it is right to utter the sentence, as uttering the sentence was supposed to explain what it is to have the belief. So this explanation would go round in circles. The general lesson here is that thoughts cannot be fully defined in terms of behaviour: other thoughts need to be mentioned too. Each time we try to associate one thought with one piece of behaviour, we discover that this association won't hold unless other mental states are in place. And trying to associate each of these other mental states with other pieces of behaviour leads to the same problems. Your individual thought may be associated with many different pieces of behaviour depending on which other thoughts you have.

A simpler example will sharpen the point. A man looks out of a window, goes to a closet and takes an umbrella before leaving his house. What is he thinking? The obvious answer is that he thought that it was raining. But notice that, even if this is true, this thought would not lead him to take his umbrella unless he also wants to stay dry and he believes that taking his umbrella will help him stay dry and he believes that this object is his umbrella. This might seem so obvious that it hardly needs saying. But, on reflection, it is obvious that if he didn't have these (doubtless unconscious) thoughts, it would be quite mysterious why he should take his umbrella when

he thought it was raining. Where this point should lead is, I think, clear: we learn about the thoughts of others by making reasoned conjectures about what makes sense of their behaviour.

However, as our little examples show, there are many ways of making sense of a piece of behaviour, by attributing to the thinker very different patterns of thought. How, then, do we choose between all the possible competing versions of what someone's thoughts are? The answer, I believe, is that we do this by employing, or presupposing, various general hypotheses about what it is to be a thinker. Take the example of the man and his umbrella. We could frame the following conjectures about what his state of mind is:

He thought it was raining, and wanted to stay dry (and, we hardly need to add, he thought his umbrella would help him stay dry and he thought this was his umbrella, etc.).

He thought it was sunny, and he wanted the umbrella to protect him from the heat of the sun (and he thought his umbrella would protect him from the sun and he thought this was his umbrella, etc.).

He had no opinion about the weather, but he believed that his umbrella had magical powers and he wanted to take it to ward off evil spirits (and he thought this was his umbrella, etc.).

He was planning to kill an enemy and believed that his umbrella contained a weapon (and he thought this was his umbrella, etc.).

All of these are possible explanations for why he did what he did, and we could think up many more. But, given that it actually is raining, and we know this, the first explanation is by far the most likely. Why? Well, it is partly because we believe that he can see what we see (that it's raining) and partly because we think that it is a generally undesirable thing to get wet when fully clothed, and that people where possible avoid undesirable things when it doesn't cost them too much effort . . . and so on. In short, we make certain assumptions about his view of his surroundings, his mental faculties, and his degree of rationality, and we attribute to him the thoughts it is reasonable for him to have, given those faculties.


Many philosophers of mind (and some psychologists) are accustomed to describing the hypotheses or assumptions we employ in understanding other minds as a kind of theory of other minds. They call this theory 'common-sense psychology' or 'folk psychology'. The idea is that, just as our common-sense knowledge of the physical world rests on knowledge of some general principles about the characteristic behaviour of physical objects ('folk physics'), so our knowledge of other minds rests on knowledge of some general principles about the characteristic behaviour of people ('folk psychology').

I agree with the idea that our common-sense knowledge of other thinkers is a kind of theory. But I prefer the label 'common-sense psychology' to 'folk psychology' as a name for this theory. These are only labels, of course, and in one sense it doesn't matter too much which you use. But, to my ear, the term 'folk psychology' carries the connotation that the principles involved are mere 'folk wisdom', homespun folksy truisms of the 'many hands make light work' variety. So, in so far as the label 'folk psychology' can suggest that the knowledge involved is unsophisticated and banal, the label embodies an invidious attitude to the theory. As we shall see, quite a lot turns on one's attitude to the theory, so it is better not to prejudice things too strongly at the outset.9

Since understanding why other thinkers do what they do is (more often than not) derived from knowledge of their observable behaviour, the understanding given by common-sense psychology is often called 'the explanation of behaviour'. Thus, philosophers often say that the point or purpose or function of common-sense psychology is the explanation of behaviour. In a sense this is true – we are explaining behaviour in that we are making sense of the behaviour by attributing mental states. But, in another way, the expression 'the explanation of behaviour' is misleading, as it makes it look as if our main concern is always with what people are doing, rather than what they are thinking. Obviously, we often want to know what people are thinking in order to find out what they will do, or to make sense of what they have done – but sometimes it is pure curiosity that makes us want to find out what they are

thinking. Here our interest is not in their behaviour as such, but in the psychological facts that organise and 'lie behind' the behaviour – those facts that make sense of the behaviour.

Behaviourists, of course, would deny that there is anything psychological lying behind behaviour. They could accept, just as a basic fact, that certain interpretations of behaviour are more natural to us than others. So, in our umbrella example, the behaviourist can accept that the reason that the man takes his umbrella is because he thought it was going to rain, and so on. This is the natural thing to say, and the behaviourist could agree. But since, according to behaviourism, there is no real substance to the idea that something might be producing the behaviour or bringing it about, we should not take the description of how the man's thoughts lead to his behaviour as literally true. We are 'at home' with certain explanations rather than others; but that doesn't mean that they are true. They are just more natural for us.

This view is very unsatisfactory. Surely, in understanding others, we want to know what is true of them, and not just which explanations we find it more natural to give. And this requires, it seems to me, that we are interested in what makes these explanations true – and therefore in what makes us justified in finding one explanation more natural than others. That is, we are interested in what it is that is producing the behaviour or bringing it about. So to understand more deeply what is wrong with this behaviourist view, we need to look more closely at the idea of thoughts lying behind behaviour.

The causal picture of thoughts
思想因果

One aspect of this idea is just the ordinary view, mentioned earlier, that we cannot directly perceive other people's thoughts. It's worth saying here that this fact by itself doesn't make other people's minds peculiar or mysterious. There are many things which we cannot perceive directly, which are not for that reason mysterious. Microbes, for example, are too small to be directly perceived; black holes are too dense even to allow light to escape from them, so we cannot directly perceive them. But our inability to directly perceive these

things does not in itself make them peculiar or mysterious. Black holes may be mysterious, but not just because we can't see them.

However, when I say that thoughts 'lie behind' behaviour I don't just mean that thoughts are not directly perceptible. I also mean that behaviour is the result of thought, that thoughts produce behaviour. This is how we know about thoughts: we know about them through their effects. That is, thoughts are among the causes of behaviour: the relation between thought and behaviour is a causal relation.

What does it mean to say that thoughts are the causes of behaviour? The notions of cause and effect are among the basic notions we use to understand our world. Think how often we use the notions in everyday life: we think the government's economic policy causes inflation or high unemployment, smoking causes cancer, the HIV virus causes AIDS, excess carbon dioxide in the atmosphere causes global warming, which will in turn cause the rising of the sea level, and so on. Causation is, in the words of David Hume (1711–1776), the 'cement of the universe'.10 To say that thoughts are the causes of behaviour is partly to say that this 'cement' (whatever it is) is what binds thoughts to the behaviour they lie behind. If my desire for a drink caused me to go to the fridge, then the relation between my desire and my action is in some sense fundamentally the same as the relation between someone's smoking and their getting cancer: the relation of cause and effect. That is, in some sense my thoughts make me move. I will call the assumption that thoughts and other mental states are the causes of behaviour the 'causal picture of thought'.

Now, although we talk about causes and effects constantly, there is massive dispute among philosophers about what causation actually is, or even if there is any such thing as causation.11 So, to understand fully what it means to say that thoughts are the causes of behaviour, we need to know a little about causation. Here I shall restrict myself to some uncontroversial features of causation, and show how these features can apply to the relation between thought and behaviour.

First, when we say that A caused B, we normally commit ourselves to the idea that if A had not occurred, B would not have

occurred. When we say, for example, that someone's smoking caused their cancer, we normally believe that if they hadn't smoked then they would not have got cancer. Philosophers put this by saying that causation involves counterfactuals: truths about matters 'contrary to fact'. So we could say that, if we believe that A caused B, we commit ourselves to the truth of the counterfactual claim: 'If A had not occurred, B would not have occurred'.

Applied to the relation between thoughts and behaviour, this claim about the relation between counterfactuals and causation says this: if a certain thought – say, a desire for a drink – has a certain action – drinking – as a result, then if that thought hadn't been there the action wouldn't have been there either. If I hadn't had the desire, then I wouldn't have had the drink.

What we learned in the discussion of behaviourism was that thoughts give rise to behaviour only in the presence of other thoughts. So my desire for a drink will cause me to get a drink only if I also believe that I am actually capable of getting myself a drink, and so on. This is exactly the same as in non-mental cases of causation: for example, we may say that a certain kind of bacterium caused an epidemic, but only in the presence of other factors such as inadequate vaccination, the absence of emergency medical care and decent sanitation and so on. We can sum this up by saying that in the circumstances, if the bacteria hadn't been there, then there wouldn't have been an epidemic. Likewise with desire: in the circumstances, if my desire had not been there, I wouldn't have had the drink. That is part of what makes the desire a cause of the action.
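Philosophers sometimes write the counterfactual conditional 'if . . . had occurred, then . . . would have occurred' with the symbol '□→'; using it, this first feature of causation can be put schematically (the notation is not needed for what follows):

If c caused e, in circumstances C, then: not-c □→ not-e

that is, if c had not occurred (in those circumstances), then e would not have occurred. Here c might be my desire for a drink and e my getting the drink, just as c might be the presence of the bacteria and e the epidemic.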

The second feature of causation I shall mention is the relation between causation and the idea of explanation. To explain something is to answer a 'Why?'-question about it. To ask 'Why did the First World War occur?' and 'Explain the origins of the First World War' is to ask pretty much the same sort of thing. One way in which 'Why?' questions can be answered is by citing the cause of what you want explained. So, for example, an answer to the question 'Why did he get cancer?' could be 'Because he smoked'; an answer to 'Why was there a fire?' could be 'Because there was a short-circuit'.

It’s easy to see how this applies to the relation between thoughts

and behaviour, since we have been employing it in our examples so far. When we ask ‘Why did the man take his umbrella?’ and answer ‘Because he thought it was raining etc.’, we are (according to the causal picture) explaining the action by citing its cause, the thoughts that lie behind it.
行为,因为到目前为止我们 一直在 我们的示例中使用它 当我们 “那个人为什么 拿走他的伞? 并回答“因为他认为正在下雨等”,我们(根据因果关系)通过引用其原因、其背后的想法来解释该行为。

The final feature of causation I shall mention is the link between causation and regularities in the world. Like much in the contemporary theory of causation, the idea that cause and regularity are linked derives from Hume. Hume said that a cause is an 'object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second'.12 So if, for example, this short-circuit caused this fire, then all events similar to this short-circuit will cause events similar to this fire. Maybe no two events are ever exactly similar; but all the claim requires is that two events similar in some specific respect will cause events similar in some specific respect.

We certainly expect the world to be regular. When we throw a ball into the air, we expect it to fall to the ground, usually because we are used to things like that happening. And if we were to throw a ball into the air and it didn't come down to the ground, we would normally conclude that something else intervened – that is, some other cause stopped the ball from falling to the ground. We expect similar causes to have similar effects. Causation seems to involve an element of regularity.

However, some regularities seem to be more regular than others. There is a regularity in my pizza eating: I have never eaten a pizza more than 20 inches in diameter. It is also a regularity that unsupported objects (apart from balloons etc.) fall to the ground. But these two regularities seem to be very different. For only modesty stops me from eating a pizza larger than 20 inches, but it is nature that stops unsupported objects from flying off into space. For this reason, philosophers distinguish between mere accidental regularities, like the first, and laws of nature, like the second.

So if there is an element of regularity in causation then there must be regularity in the relation between thought and behaviour – if this really is a causal relation. I'll discuss the idea that there are such regularities, and what they may be like, in the next section.

Let's draw these various lines of thought about causation and thought together. To say that thoughts cause behaviour is to say at least the following things:

The relation between thought and behaviour involves the truth of a counterfactual to the effect that, given the circumstances, if the thought had not been there, then the behaviour would not have been there.

To cite a thought, or bunch of thoughts, as the cause of a piece of behaviour is to explain the behaviour, since citing causes is one way of explaining effects.

Causes typically involve regularities or laws, so, if there is a causal relationship between thought and behaviour, then we might expect there to be regularities in the connection between thought and behaviour.

At no point have I said that causation has to be a physical relation. Causation may be mental or physical, depending on whether what it relates (its 'relata') are mental or physical. So the causal picture of the mind does not entail physicalism or materialism. Nonetheless, the causal picture of thought is a key element in what I am calling the 'mechanical' view of the mind. According to this view, the mind is a causal mechanism: a part of the causal order of nature, just as the liver and the heart are part of the causal order of nature. And we find out about the minds of others in just the same way that we find out about the rest of nature: by their effects. The mind is a mechanism that has its effects in behaviour.

But why should we believe that mental states are causes of behaviour at all? After all, it is one thing to deny behaviourism but quite another to accept that mental states are causes of behaviour. This is not a trivial hypothesis, something that anyone would accept who understood the concept of a mental state. In fact, many philosophers deny it. For example, the view that mental states are causes of behaviour is denied by Wittgenstein and some of his followers. In their view, to describe the mind in terms of causes and mechanisms is to make the mistake of imposing a model of explanation which

is only really appropriate for non-mental things and events. 'The mistake', writes G.E.M. Anscombe, a student of Wittgenstein's, 'is to think that the relation of being done in execution of a certain intention, or being done intentionally, is a causal relation between act and intention'.13

Why might someone think this? How might it be argued that mental states are not the causes of behaviour? Well, consider the example of the mental phenomenon of humour. We can distinguish between the mental state (or, more precisely, event) of being amused and the observable manifestations of that state: laughing, smiling and so on. We need to make this distinction, of course, because someone can be silently amused, and someone can pretend to be amused and convince others that they are genuinely amused. But does this distinction mean that we have to think of the inner state of being amused as causing the outward manifestations? The opponents of the causal view of the mind say not. We should, rather, think of the laughing (in a genuine case of amusement) as the expression of amusement. Expressing amusement in this case should not be thought of as an effect of an inner state, but rather as partially constituting what it is to be amused. To think of the inner state as causing the external expression would be as misleading as thinking of some hidden facts that a picture (or a piece of music) expresses. As Wittgenstein puts it, 'speech with and without thought is to be compared with the playing of a piece of music with or without thought'.14

This may help give some idea of why some philosophers reject the causal picture of thought. Given this opposition, we need rea- sons for believing in the causal picture of thought. What reasons can be given? Here I shall mention two reasons that support the causal picture. The first argument derives from ideas of Donald Davidson’s.15 The second is a more general and ‘ideological’ argu- ment it depends on accepting a certain picture of the world, rather than accepting that a certain conclusion decisively follows from a certain set of indisputable premises. ...

The first argument is best introduced with an example. Consider someone, let’s call him Boleslav, who wants to kill his brother. Let’s suppose he is jealous of his brother, and feels that his brother is frustrating his own progress in life. We could say that Boleslav has a reason for killing his brother – we might not think it is a very good reason, or a very moral reason, but it is still a reason. A reason (in this sense) is just a collection of thoughts that make sense of a certain plan of action. Now, suppose that Boleslav is involved in a bar-room brawl one night, for reasons completely unconnected to his murderous plot, and accidentally kills a man who, unknown to him, is his brother (perhaps his brother is in disguise). So Boleslav has a reason to kill his brother, and kills his brother, but does not kill his brother for that reason.

Compare this alternative story: Boleslav wants to kill his brother, for the same reason. He goes into the bar, recognises his brother and shoots him dead. In this case, Boleslav has a reason for killing his brother, and kills his brother for that reason.

What is the difference between the two cases? Or, to put it another way, what is involved in performing an action for a reason? The causal picture of thoughts gives an answer: someone performs an action for a reason when their reason is a cause of their action. So, in the first case, Boleslav’s fratricidal plan did not cause him to kill his brother, even though he did have a reason for doing so, and he did perform the act. But, in the second case, Boleslav’s fratricidal plan was the cause of his action. It is the difference in the causation of Boleslav’s behaviour that distinguishes the two cases.

How plausible is it to say that Boleslav’s reason (his murderous bunch of thoughts) was the cause of the murder in the second case but not in the first? Well, remember the features of causation mentioned above; let’s apply two of them to this case. (I shall ignore the connection between mental causation and laws – this will be discussed in the next section.)

First, the counterfactual feature: it seems right to say that, in the first case, other things being equal (i.e. keeping all the other circumstances the same as far as possible), if Boleslav had not had the fratricidal thoughts, then he would still have killed his brother. Killing his brother in the brawl is independent of his fratricidal thoughts. But in the second case this is not so.

Second, the explanatory feature of causation. When we ask ‘Why did Boleslav kill his brother?’ in the first case, it is not a good answer to say ‘Because he was jealous of his brother’. His jealousy of his brother does not explain why he killed his brother in this case; he did not kill his brother because of the fratricidal desires that he had. In the second case, however, killing his brother is explained by the fratricidal thoughts: we should treat them as the cause.

What the argument claims is that we need to distinguish between these two sorts of case, and that we can distinguish between them by thinking of the relation between reason and action as a causal relation. And this gives us an answer to the question: what is it to do something for a reason, or what is it to act on a reason? The answer is: to act on a reason is to have that reason as a cause of one’s action.

I think this argument is persuasive. But it is not absolutely compelling. For the argument itself does not rule out an alternative account of what it is to act on a reason. The structure of the argument is as follows: here are two situations that obviously differ; we need to explain the difference between them; appealing to causation explains the difference between them. This may be right – but notice that it does not rule out the possibility that there is some other even better account of what it is to act on a reason. It is open, therefore, to the opponent of the causal picture of thought to respond to the argument by offering an alternative account. So the first argument will not persuade this opponent.

However, it is useful to see this argument of Davidson’s in its historical context. The argument is one of a number of arguments which arose in opposition to the view above that I attributed to Wittgenstein and his followers: the view that it is a mistake to think of the mind in causal terms at all. These other arguments aimed to show that there is an essential causal component in many mental concepts. For example, perception was analysed as involving a causal relation between perceiver and the object perceived; memory was analysed as involving a causal relation between the memory and the fact remembered; knowledge and the relation between language and reality was thought of as fundamentally based on causal relations.16 Davidson’s argument is part of a movement which analysed many mental concepts in terms of causation. Against this background, I can introduce my second argument for the causal picture of thought.

The second argument is what I call the ideological argument. I call it this because it depends upon accepting a certain picture of the world, the mechanical/causal world picture. This picture sees the whole of nature as obeying certain general causal laws – the laws of physics, chemistry, biology, etc. – and it holds that psychology too has its laws, and that the mind fits into the causal order of nature. Throughout nature we find causation, the regular succession of events and the determination of one event by another. Why should the mind be exempt from this sort of determination?

After all, we do all believe that mental states can be affected by causes in the physical world: the colours you see, the things you smell, the food you taste, the things you hear – all of these experiences are the result of certain purely mechanistic physical processes outside your mind. We all know how our minds can be affected by chemicals – stimulants, antidepressants, narcotics, alcohol – and in all these cases we expect a regular, law-like connection between the taking of the chemical drug and the nature of the thought. So if mental states can be effects, what are supposed to be the reasons for thinking that they cannot also be causes?

I admit that this falls a long way short of being a conclusive argument. But it’s hard to see how you could have a conclusive philosophical argument for such a general, all-embracing view. What I am going to assume here, in any case, is that, given this overall view of the non-mental world, we need some pretty strong positive reasons to believe that the mental world does not work in the same sort of way.

Common-sense psychology

So much, for the time being, for the idea that mental states are the causes of behaviour. Let’s now return to the idea of common-sense psychology: the idea that when we understand the minds of others, we employ (in some sense) a sort of ‘theory’ which characterises or describes mental states. Adam Morton has called this idea the ‘Theory Theory’ of common-sense psychology – i.e. the theory that common-sense psychology is a theory – and I’ll borrow the label from him.17 To understand this Theory Theory, we need to know what a theory is, and how the common-sense psychology theory applies to mental states. Then we need to ask about how this theory is supposed to be employed by thinkers.


In the most general sense, we can think of a theory as a principle, or collection of principles, designed to explain certain phenomena. So to have a theory of mental states is to have a collection of principles which explain mental phenomena. In the case of common-sense psychology, these principles may be as simple as truisms: for example, that people generally try to achieve the object of their desires (other things being equal); or that if a person is looking at an object in front of him/her in good light, then he/she will normally believe that the object is in front of him/her (other things being equal). (The apparent triviality of these truths will be discussed below.)

However, in the way it is normally understood, the claim that common-sense psychology is a theory is not just the claim that there are principles which describe the behaviour of mental states. What is meant in addition to this is that mental states are what philosophers call ‘theoretical entities’.18 That is, it is not just that mental states are describable by a theory, but also that the (true, complete) theory of mental states tells us everything there is to know about them. Compare the theory of the atom. If we knew a collection of general principles that described the structure and behaviour of the atom, these would tell us everything we needed to know about atoms in general – for everything there is to know about atoms is contained within the true complete theory of the atom. (Contrast colours: it’s arguably false that everything we know about colours is contained within the physical theory of colours. We also know what colours look like, which is not something that can be given by having knowledge of the theory of colours.19) Atoms are theoretical entities, not just in the sense that they are posits of a theory, but also because their nature is exhausted by the description of them given by the theory. Likewise, according to the Theory Theory, all there is to know about, say, belief is contained within the true complete theory of belief.

An analogy may help to make the point clear.20 Think of the theory as being rather like a story. Consider a story which goes like this: ‘Once upon a time there was a man called King Lear, who had three daughters, called Goneril, Regan and Cordelia. One day he said to them . . . ’ and so on. Now, if you ask, ‘Who was King Lear?’, a perfectly correct answer would be to paraphrase some part of the story: ‘King Lear is the man who divided his kingdom, disinherited his favourite daughter, went mad, and ended up on a heath’ and so on. But if you ask, ‘Did King Lear have a son? What happened to him?’ or ‘What sort of hairstyle did King Lear have?’, the story gives no answer. But it’s not that there is some fact about Lear’s son or his hairstyle which the story fails to mention; it’s rather that everything there is to know about Lear is contained within the story. To think there might be more is to misunderstand the story. Likewise, to think that there is more to atoms than is contained within the true complete theory of atoms is (on this view of theories) to fail to appreciate that atoms are theoretical entities.

The analogy with common-sense psychology is this. The theory of belief, for example, might say something like: ‘There are these states, beliefs, which causally interact with desires to cause actions . . . ’ and so on, listing all the familiar facts about beliefs and their relations to other mental states. Once all these familiar facts have been listed, the list gives a ‘theoretical definition’ of the term ‘belief’. The nature of beliefs will be, on this view, entirely exhausted by these truisms about beliefs. There is no more to beliefs than is contained within the theory of belief; and likewise with other kinds of thought.21

It is important to distinguish, in principle, the idea that common-sense psychology is a theory from the causal picture of thoughts as such. One could accept the causal picture of thoughts – which, remember, is simply the claim that thoughts have effects in behaviour – without accepting the idea that common-sense psychology is a theory (see ‘Theory versus simulation’, p. 77). It would also be possible to deny the causal theory of thoughts – to deny that thoughts have effects – while accepting the conception of common-sense psychology as a theory. This view could be held by someone who is sceptical about the existence of causation, for example – though this would be quite an unusual view.

Bearing this in mind, we need to say more about how the Theory Theory is supposed to work, and what the theory says that thoughts are. Let’s take another simple everyday example. Suppose we see someone running along an empty pavement, carrying a number of bags, while a bus overtakes her, approaching a bus stop. What is she doing? The obvious answer is: she is running for the bus. The reflections earlier in this chapter should make us aware that there are alternatives to the obvious answer: perhaps she thinks she is being chased by someone, or perhaps she just wants to exercise. But, given the fact that the pavement is otherwise empty, and the fact that people don’t normally exercise while carrying large bags, we draw the obvious conclusion.

As with our earlier example, we rule out the more unusual interpretations because they don’t strike us as reasonable or rational things for the person to do. In making this interpretation of her behaviour, we assume a certain degree of rationality in the woman’s mind: we assume that she is pursuing her immediate goal (catching the bus), doubtless in order to reach some long-term goal (getting home). We assume this because these are, in our view, reasonable things to do, and she is using reasonable ways to try and do them (as opposed to, say, lying down in the middle of the road in front of the bus and hoping that the bus driver will pick her up).

To say this is not to deny the existence of irrational and crazy behaviour. Of course not. But if all behaviour was irrational and crazy, we would not be able to make these hypotheses about what is going on in people’s minds. We would not know how to choose between one wild hypothesis and another. In order for the interpretation of other thinkers to be possible in general, then, we have to assume that there is a certain regularity in the connection between thought and behaviour. And if the relation between people’s thoughts and their behaviour is to be regular enough to allow interpretation, then it is natural to expect that common-sense psychology will contain generalisations which detail these regularities. In fact, if common-sense psychology really is a theory, this is what we should expect anyway – for a theory is (at the very least) a collection of general principles or laws.

So the next question is: are there any psychological generalisations? Scepticism about such generalisations can come from a number of sources. One common kind of scepticism is based on the idea that, if there were psychological generalisations, then surely we (as ‘common-sense psychologists’) should know them. But, in fact, we are very bad at bringing any plausible generalisations to mind. As Adam Morton says, ‘principles like “anyone who thinks there is a tiger in this room will leave it” are . . . almost always false’.22 And when we do actually succeed in bringing to mind some true generalisations, they can turn out to be rather disappointing – consider our earlier example: ‘People generally try to achieve the object of their desires (other things being equal)’. We are inclined to say: ‘Of course! Tell me something I didn’t know!’. Here is Morton again:

The most striking thing about common-sense psychology . . . is the combination of a powerful and versatile explanatory power with a great absence of powerful or daring hypotheses. When one tries to come up with principles of psychological explanation generally used in everyday life one only finds dull truisms, and yet in particular cases, interesting brave and acute hypotheses are produced about why one person . . . acts in some particular way.23

There is obviously something right about this point; but perhaps it is a little exaggerated. After all, if the Theory Theory is right about common-sense psychology, we are employing this theory all the time when we interpret one another. So it will be hardly surprising if we find the generalisations that we use ‘truistic’. They will be truistic because they are so familiar – but this does not mean that they are not powerful. Compare our everyday theory of physical objects – ‘folk physics’. We know that solid objects resist pressure and cannot be penetrated by other objects. In a sense this is truistic, but its very obviousness informs all our dealings with the world of objects.

Another way in which the defender of the Theory Theory can respond is by saying that it is only the assumption that we have some knowledge of a psychological theory of other minds that can satisfactorily explain how we manage to interpret other people so successfully. However, this knowledge need not be explicitly known by us – that is, we need not be able to bring this knowledge to our conscious minds. But this unconscious knowledge – like the mathematical knowledge of Meno’s slave which was discussed in Chapter 1 (see ‘Thought and consciousness’, p. 26) – is nonetheless there. And it explains how we understand each other, just as (say) unconscious or ‘tacit’ knowledge of linguistic rules explains how we understand language. (We will return to this idea in Chapter 4.)

So far, then, I have claimed that common-sense psychology operates by assuming that people are largely rational, and by assuming the truth of certain generalisations. We might not be able to state all these generalisations. But given that we know some of them – even the ‘dull truisms’ – we can now ask: what do the generalisations of common-sense psychology say that thoughts themselves are?

Let’s return to the example of the woman running for the bus. If someone were to ask why we interpret her as running for the bus, one thing we might say is: ‘Well, it’s obvious: the bus is coming’. But, when you think about it, this isn’t quite right. For it’s not the fact that the bus is coming which makes her do what she does, it’s the fact that she thinks that the bus is coming. If the bus were coming and she didn’t realise it, then she wouldn’t be running for the bus. Likewise, if she thought the bus was coming when in fact it wasn’t (perhaps she mistakes the sound of a truck for the sound of the bus), she would still run.

In more general terms, what people do is determined by how they take the world to be, and how a thinker takes the world to be is not always how the world is (we all make mistakes). But to say that a thinker ‘takes’ the world to be a certain way is just another way of saying that the thinker represents the world as being a certain way. So what thinkers do is determined by how they represent the world to be. That is, according to common-sense psychology, the thoughts which determine behaviour are representational.

Notice that it is how things are represented in thought that matters to common-sense psychology, not just what objects are represented. Someone who thinks the bus is coming must represent the bus as a bus, and not (for example) just as a motorised vehicle of some kind – for why should anyone run after a motorised vehicle of some kind? Or consider Boleslav: although he killed his brother in the first scenario, and represented his brother to himself in some way, he did not represent his brother as his brother, and this is why his desire to kill his brother is not the cause of the murder. (Recall the example of Orwell in Chapter 1: ‘Intentionality’.)

The other central part of the common-sense conception, at least according to the causal picture of thoughts, is that thoughts are the causes of behaviour. The common-sense conception says that, when we give an explanation of someone’s behaviour in terms of beliefs and desires, the explanation cites the causes of the behaviour. When we say that the woman is running for the bus because she believes that the bus is coming and wants to go home on the bus, this ‘because’ expresses causation, just as the ‘because’ in ‘He got cancer because he smoked’ expresses causation.

Combining the causal picture of thought with the Theory Theory, we get the following: common-sense psychology contains generalisations which describe the effects and potential effects of having certain thoughts. For instance: the simple examples we have discussed are examples in which what someone does depends on what he or she believes and what he or she wants or desires. So the causal picture-plus-Theory Theory would say that common-sense psychology contains a generalisation or bunch of generalisations about how beliefs and desires interact to cause actions. A rough attempt at formulating a generalisation might be:
思想 因果图景理论理论相结合我们得到以下内容:常识心理学包含描述拥有某些想法的效果潜在影响概括 例如我们讨论的简单例子 某人 做什么取决于 他或她的信仰以及他或她想要或渴望什么的例子。因此,因果图观加理论 常识心理学包含关于信念欲望如何 相互作用引起行动概括一堆 概括 粗略地尝试构建一个概括可能是:

Beliefs combine with desires to cause actions which aim at the satisfaction or fulfilment of those desires.24

So, for example, if I desire a glass of wine, and I believe that there is some wine in the fridge, and I believe that the fridge is in the kitchen, and I believe the kitchen is over there, these will cause me to act in a way that aims at the satisfaction of the desire: for example, I might move over there towards the fridge. (For more on this, see Chapter 5: ‘Representation and success in action’.)

Of course, I might not – even if I had all these beliefs and this desire. If I had another, stronger, desire to keep a clear head, or if I believed that the wine belonged to someone else and thought I shouldn’t take it, then I may not act on my desire for a glass of wine. But this doesn’t undermine the generalisation, since the generalisation is compatible with any number of desires interacting to bring about my action. If my desire to keep a clear head is stronger than my desire to have a drink, then it will be the cause of a different action (avoiding the fridge, going for a bracing walk in the country, or some such). All the generalisation says is that one will act in a way that aims to satisfy one’s desires, whatever they are.
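For readers who find a schematic helpful, here is a minimal sketch in Python of the generalisation just stated: beliefs and desires combine, and the strongest desire wins out in causing an action. Everything in it – the names, the toy data, the crude numerical ‘strength’ ranking – is invented purely for illustration; it is not offered as the book’s account of the mind, nor as a serious model of practical reasoning.

```python
# A toy sketch of the common-sense generalisation discussed above:
# beliefs combine with desires to cause an action aimed at satisfying
# the strongest desire. All names and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Desire:
    goal: str        # e.g. 'drink wine'
    strength: float  # competing desires are ranked by strength

def act(beliefs: dict, desires: list) -> str:
    """Return an action aimed at satisfying the strongest desire,
    given what the agent believes about where things are."""
    strongest = max(desires, key=lambda d: d.strength)
    if strongest.goal == 'drink wine' and beliefs.get('wine') == 'fridge':
        # the belief about where the fridge is guides the movement
        return 'go to the ' + beliefs.get('fridge', 'unknown place')
    if strongest.goal == 'keep a clear head':
        return 'avoid the fridge'
    return 'do nothing'

beliefs = {'wine': 'fridge', 'fridge': 'kitchen'}
print(act(beliefs, [Desire('drink wine', 0.6)]))
# -> 'go to the kitchen'
print(act(beliefs, [Desire('drink wine', 0.6), Desire('keep a clear head', 0.9)]))
# -> 'avoid the fridge'
```

The sketch makes only the point made in the text: the generalisation quantifies over whatever desires the agent happens to have, and whichever is strongest is the cause of the action that gets performed.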

It’s worth stressing again that trains of thought like these are not supposed to run through one’s conscious mind. Someone who wants a drink will hardly ever consciously think, ‘I want a drink; the drink is in the fridge; the fridge is over there; therefore I should go over there’ and so on. (If this is what he or she is consciously thinking, then it is probably unwise to have another drink.) The idea is rather that there are unconscious thoughts, with these representational contents, which cause a thinker’s behaviour. These thoughts are the causal ‘springs’ of thinkers’ actions, not necessarily the occupants of their conscious minds.

Or that’s what the causal version of the Theory Theory says; it’s now time to assess the Theory Theory. In assessing it, we need to address two central questions. First, does the Theory Theory give a correct account of our everyday psychological understanding of each other? That is, is it right to talk about common-sense psychology as a kind of theory at all, or should it be understood in some other way? (Bear in mind that to reject the Theory Theory on these grounds is not ipso facto to reject the causal picture of thoughts.)

The second question is, even if our everyday psychological understanding of each other is a theory, is it a good theory? That is, suppose the collection of principles and platitudes about beliefs and desires causing actions (and so on), which I am calling common-sense psychology, is indeed a theory of human minds; are there any reasons for thinking that it is a true theory of human minds? This might seem like an odd question but, as we shall see, one’s attitude to it can affect one’s whole attitude to the mind.

It will be simplest if I take these questions in reverse order.

The science of thought: elimination or vindication?

Let’s suppose, then, that common-sense psychology is a theory: the theory of belief, desire, imagination, hope, fear, love and the other psychological states which we attribute to one another. In calling this theory common-sense psychology, philosophers implicitly contrast it with the scientific discipline of psychology. Common-sense psychology is a theory whose mastery requires only a fairly mature mind, a bit of imagination and some familiarity with other people. In this sense, we are all psychologists. Scientific psychology, however, uses many technical concepts and quantitative methods which only a small proportion of ‘common-sense psychologists’ understand. But both theories claim, on the face of it, to be theories of the same thing – the mind. So how are they related?

It won’t do simply to assume that in fact scientific psychology and common-sense psychology are theories of different things – scientific psychology is the theory of the brain, while common-sense psychology is the theory of the mind or the person. There are at least three reasons why this won’t work. First, for all that we have said about these theories so far, the mind could just be the brain. As I said in Chapter 1, this is a question we can leave to one side in discussing thought and mental representation. But, whatever conclusion we reach on this, we certainly should not assume that just because we have two theories, we have two things. (Compare: common-sense says that the table is solid wood; particle physics says that the table is mostly empty space. It is a bad inference to conclude that there are two tables just because there are two theories.25)

Second, scientific psychology talks about a lot of the same kinds of mental states as we talk about in common-sense psychology. Scientific psychologists attempt to answer questions such as: How does memory work? How do we see objects? Why do we dream? What are mental images? All these mental states and events – memory, vision, dreaming and mental imagery – are familiar to common-sense psychology. You do not have to have any scientific qualifications to be able to apply the concepts of memory or vision. Both scientific and common-sense psychology have things to say about these phenomena; there is no reason to assume at the outset that the phenomenon of vision for a scientific psychologist is a different phenomenon from the phenomenon of vision for a common-sense ‘psychologist’.

Finally, a lot of actual scientific psychology is carried out without reference to the actual workings of the brain. This is not normally because the psychologists involved are Cartesian dualists, but rather because it often makes more sense to look at how the mind works in large-scale, macroscopic terms – in terms of ordinary behaviour – before looking at the details of its neural implementation. So the idea that scientific psychology is concerned only with the brain is not true even to the actual practice of psychology.

Given that scientific psychology and common-sense psychology are concerned with the same thing – the mind – the question of the relationship between them becomes urgent. There are many approaches one can take to this relationship, but in the end they boil down to two: vindication or elimination. Let’s look at these two approaches.

According to the vindication approach, we already know (or have good reason to believe) that the generalisations of common-sense psychology are largely true. So one of the things we can expect from scientific psychology is an explanation of how or why they are true. We know, for example, that if normal perceivers look at an object in good light, with nothing in the way, they will come to believe that the object is in front of them. So one of the aims of a scientific psychology of vision and cognition is to explain why this humble truth is in fact true: what is it about us, about our brains and our eyes, and about light, that makes it possible for us to see objects, and to form beliefs about them on the basis of seeing them. The vindication approach might use an analogy with common-sense physics. Before Newton, people already knew that if an object is thrown into the air, it eventually returns to the ground. But it took Newton’s physics to explain why this truth is, in fact, true. And this is how things will be with common-sense psychology.26

By contrast, the elimination approach says that there are many reasons for doubting whether common-sense psychology is true. And if it is not true then we should allow the science of the mind or the brain to develop without having to employ the categories of common-sense psychology. Scientific psychology has no obligation to explain why the common-sense generalisations are true, because there are good reasons for thinking they aren’t true! So we should expect scientific psychology eventually to eliminate common-sense, rather than to vindicate it. This approach uses an analogy with discredited theories such as alchemy. Alchemists thought that there was a ‘philosopher’s stone’ which could turn lead into gold. But science did not show why this was true – it wasn’t true, and alchemy was eventually eliminated. And this is how things will be with common-sense psychology.27

Since proponents of the elimination approach are always materialists, the approach is known as eliminative materialism. According to one of its leading defenders, Paul Churchland:

[E]liminative materialism is the thesis that our common-sense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of the theory will eventually be displaced . . . by completed neuroscience.

By ‘the ontology of the theory’, Churchland means those things which the theory claims to exist: beliefs, desires, intentions and so on. (‘Ontology’ is the study of being, or what exists.) So to say that the ontology of common-sense psychology is defective is to say that common-sense psychology is wrong about what is in the mind. In fact, eliminative materialists normally claim that none of the mental states that common-sense psychology postulates exists. That is, there are no beliefs, desires, intentions, memories, hopes, fears and so on.

This might strike you as an incredible view. How could any reasonable person think that there are no thoughts? Isn’t that as self-refuting as saying that there are no words? But, before assessing the view, notice how smoothly it seems to flow from the conception of common-sense psychology as a theory, and of mental states as theoretical entities, mentioned in the previous section. Remember that, on this conception, the entire nature of thoughts is described by the theory. The answer to the question ‘What are thoughts?’ is: ‘Thoughts are what the theory of thoughts says they are’. So, if the theory of thoughts turns out to be false, then there is nothing for thoughts to be. That is, either the theory is largely true, or there are no thoughts at all. (Compare: atoms are what the theory of atoms says they are. There is nothing more to being an atom than what the theory says; so if the theory is false, there are no atoms.)

Eliminative materialists adopt the view that common-sense psychology is a theory, and then argue that the theory is false.28 But why do they think the theory is false? One reason they give is that (contrary to the vindication approach) common-sense psychology does not in fact explain very much:

[T]he nature and dynamics of mental illness, the faculty of creative imagination . . . the nature and psychological function of sleep . . . the rich variety of perceptual illusions . . . the miracle of memory . . . the nature of the learning process itself . . .29

all of these phenomena, according to Churchland, are ‘wholly mysterious’ to common-sense psychology, and will probably remain so. A second reason for rejecting common-sense psychology is that it is ‘stagnant’ – it has shown little sign of development throughout its long history (whose length Churchland rather arbitrarily gives as twenty-five centuries30). A third reason is that there seems little chance that the categories of common-sense psychology (belief, desire and so on) will ‘reduce’ to physical categories, i.e. it seems very unlikely that scientists will be able to say in a detailed and systematic way which physical phenomena underpin beliefs and desires. (Remember the absurdity of ‘mother love is magnesium’.)

If this cannot be done, Churchland argues, there is little chance of making common-sense psychology scientifically respectable.

Before assessing these reasons, we must return to the question that is probably still worrying you: how can anyone really believe this theory? How can anyone believe that there are no beliefs? Indeed, how can anyone even assert the theory? For to assert something is to express a belief in it; but, if eliminative materialism is right, then there are no beliefs, so no-one can express them. So aren’t eliminative materialists, by their own lights, just sounding off, vibrating the airwaves with meaningless sounds? Doesn’t their theory refute itself?
评估这些原因之前, 我们必须回到 可能仍然困扰你的问题:怎么会有人真的相信这个理论呢?怎么会有人相信没有信仰呢?事实上, 怎么会有人 断言这个理论呢? 因为 断言某事就是表达它的信念;但是,如果消除唯物主义是对的,那么就没有信仰,所以没有人可以表达它们。那么消除唯物主义者不正是他们自己的灯光,发出无意义的声音来震动电波吗?他们的理论不是在反驳自己吗?

Churchland has responded to this argument by drawing an analogy with the nineteenth-century belief in vitalism – the thesis that it is not possible to explain the difference between living and non-living things in wholly physicochemical terms, but only by appealing to the presence of a vital spirit or ‘entelechy’ which explains the presence of life. He imagines someone arguing that the denial of vitalism (antivitalism) is self-refuting:

My learned friend has stated that there is no such thing as vital spirit. But this statement is incoherent. For if it is true, then my friend does not have vital spirit, and therefore must be dead. But if he is dead, then his statement is just a string of noises, devoid of meaning or truth. Evidently, the assumption that antivitalism is true entails that it cannot be true! QED.31

The argument being parodied is this: the vitalists held that it was in the nature of being alive that one’s body contained vital entelechy, so anyone who denies the existence of vital entelechies claims in effect that nothing is alive (including themselves). This is a bad argument. Churchland claims that the self-refutation charge against eliminative materialism involves an equally bad argument: what it is to assert something, according to common-sense psychology, is to express a belief in it; so anyone who denies the existence of beliefs in effect claims that no-one asserts anything (including the eliminative materialists).

Certainly, the argument in favour of vitalism is a bad one. But the analogy is not very persuasive. For, whereas we can easily make sense of the idea that life might not involve vital entelechy, it’s very hard to make sense of the analogous idea that assertion might not involve the expression of belief. Assertion itself is a notion from common-sense psychology: to assert something is to claim that it is true. In this sense, assertion is close to the idea of belief: to believe something is to hold it as true. So if common-sense psychology is eliminated, assertion as well as belief must go.32

Churchland may respond that we should not let the future development of science be dictated by what we can or cannot imagine or make sense of. If in the nineteenth century there were people who could not make sense of the idea that life did not consist of vital ‘entelechy’, these people were victims of the limitations of their own imaginations. But, of course, though it is a good idea to be aware of our own cognitive limits, such caution by itself does not get us anywhere near the eliminative position.

But we do not need to settle this issue about self-refutation in order to assess eliminative materialism. For, when examined, the positive arguments in support of the view are not very persuasive anyway. I shall briefly review them.

First, take the idea that common-sense psychology hasn’t explained much. On the face of it, the fact that the theory which explains behaviour in terms of beliefs and desires does not also explain why we sleep (and the other things mentioned above) is not in itself a reason for rejecting beliefs and desires. For why should the theory of beliefs and desires have to explain sleep? This response seems to demand too much of the vindication view.

Second, let’s consider the charge that common-sense psychology is ‘stagnant’. This is highly questionable. One striking example of how the common-sense theory of mind seems to have changed is in the place it assigns to consciousness (see Chapter 1). It is widely accepted that, since Freud, many people in the West accept that it makes sense to suppose that some mental states (for example, desires) are not conscious. This is a change in the view of the mind that can plausibly be regarded as part of common-sense.

In any case, even if common-sense psychology had not changed very much over the centuries, this would not in itself establish much. The fact that a theory has not changed for many years could be a sign either of the theory’s stagnation or of the fact that it is extremely well established. Which of these is the case depends on how good the theory is in explaining the phenomena, not on the absence of change as such. (Compare: the common-sense physical belief that unsupported bodies fall to the ground has not changed for many centuries. Should we conclude that this common-sense belief is stagnant?)

Third, there is the issue of whether the folk psychological categories can be reduced to physical (or neurophysiological) categories. The assumption here is that, in order for a theory to be scientifically respectable, it has to be reducible to physics. This is a very extreme assumption, and, as I suggested in the introduction, it does not have to be accepted in order to accept the idea that the mind can be explained by science. If this is right, the vindication approach can reject reductionism without rejecting the scientific explanation of the mind.33

So, even if they are not ultimately self-refuting, the arguments for eliminative materialism are not very convincing. The specific reasons eliminative materialists offer in defence of the theory are very controversial. Nonetheless, many philosophers of mind are disturbed by the mere possibility of eliminative materialism. The reason is that this possibility (however remote) is one which is implicit in the Theory Theory. For if common-sense psychology really is an empirical theory – that is, a theory which claims to be true of the ordinary world of experience – then, like any empirical theory, its proponents must accept the possibility that it may one day be falsified. No matter how much we believe in the theories of evolution or relativity, we must accept (at least) the possibility that one day they may be shown to be false.

One way to avoid this unhappy situation is to reject the Theory Theory altogether as an account of our ordinary understanding of other minds. This approach would give a negative answer to the first question posed at the end of the last section – ‘Does the Theory Theory give an adequate account of common-sense psychology?’. Let’s take a brief look at this approach.

Theory versus simulation

So there are many philosophers who think that the Theory Theory utterly misrepresents what we do when we apply psychological concepts to understand each others’ minds. Their alternative is rather that understanding others’ minds involves a kind of imaginative projection into their minds. This projection they call variously ‘replication’ or ‘simulation’.

The essence of the idea is easy to grasp. When we try and figure out what someone else is doing, we often put ourselves ‘in their shoes’, trying to see things from their perspective. That is, we imaginatively ‘simulate’ or ‘replicate’ the thoughts that might explain their behaviour. In reflecting on the actions of another, according to Jane Heal:

[W]hat I endeavour to do is to replicate or recreate his thinking. I place myself in what I take to be his initial state by imagining the world as it would appear from his point of view and then deliberate, reason and reflect to see what decision emerges.34

A similar view was expressed over forty years ago by W.V. Quine:

[P]ropositional attitudes . . . can be thought of as involving something like quotation of one’s imagined verbal response to an imagined situation. Casting our real selves thus in unreal roles, we do not generally know how much reality to hold constant. Quandaries arise. But despite them we find ourselves attributing beliefs, wishes and strivings even to creatures lacking the power of speech, such is our dramatic virtuosity. We project ourselves even into what from his behaviour we imagine a mouse’s state of mind to have been, and dramatize it as a belief, wish or striving, verbalized as seems relevant and natural to us in the state thus feigned.35

Recent thinkers have begun to take Quine’s observation very seriously, and there are a number of options emerging on how to fill out the details. But common to them all is the idea that figuring out what someone thinks is not looking at their behaviour and applying a theory to it. Rather, it is something more like a skill we have: the skill to imagine ourselves into the minds of others, and to predict and explain their behaviour as a result.

It is easy to see how this ‘simulation theory’ of common-sense psychology can avoid the issue of the elimination of the mind. The eliminative materialist argument in the last section started with the assumptions that common-sense psychology was a theory, that the things it talks about are fully defined by the theory, and that it is competing with scientific psychology. The argument then said that common-sense psychology is not a very good theory – and concluded that there are no good reasons for thinking that mental states exist. But if common-sense psychology is not a theory at all then it is not even in competition with science, and the argument doesn’t get off the ground.

Although adopting the simulation theory would be a way of denying a premise – the Theory Theory – in one of the arguments for eliminative materialism, this is not a very good reason in itself for believing in the simulation theory. For, looked at in another way, the simulation theory could be quite congenial to eliminative materialists: it could be argued that, if common-sense psychology does not even present itself as a science, or as a ‘proto-science’, then we do not need to think of it as true at all. So one could embrace the simulation theory without believing that minds really exist. (The assumption here, of course, is that the only claims that tell us what there is in the world are the claims made by scientific theories.)

This combination of simulation theory and eliminative materialism is actually held by Quine. Contrast the remark quoted earlier with the following:

The issue is . . . whether in an ideal last accounting of everything . . . it is efficacious so to frame our conceptual scheme as to mark out a range of entities or units of a so-called mental kind in addition to the physical ones. My hypothesis, put forward in the spirit of a hypothesis of natural science, is that it is not efficacious.36

Since eliminative materialism and the simulation theory are compatible in this way, avoiding eliminative materialism would be a very bad motivation on its own for believing in the simulation theory.

And, of course, simulation theorists have a number of independent reasons for believing in their theory. One reason has already been mentioned in this chapter (in the section ‘Common-sense psychology’): no-one has been able to come up with very many powerful or interesting common-sense psychological generalisations. Remember Adam Morton’s remark that most of the generalisations of folk psychology are ‘dull truisms’. This is not intended as a knock-down argument, but (simulation theorists say) it should encourage us to look for an alternative to the Theory Theory.

So what should we make of the simulation theory? Certainly, many of us will recognise that this is often how things seem to us when we understand one another. ‘Seeing things from someone else’s point of view’ can even be practically synonymous with understanding them, and failure to see things from others’ points of view is clearly failure in one’s ability as a common-sense psycholo- gist. But if simulation is such an obvious part of our waking lives, why should anyone deny that it takes place? And if no-one (even a Theory Theorist) should deny that it takes place, how is the simula- tion theory supposed to be in confiict with the Theory Theory? Why couldn’t a Theory Theorist respond by saying: ‘I agree: that’s how understanding other minds seems to us; but you couldn’t simulate unless you had knowledge of some underlying theory whose truth made the simulation possible. This underlying theory need not be applied consciously; but as we all know, this doesn’t mean it isn’t there’.
那么 我们应该 如何看待 模拟理论呢? 当然,我们中的许多人认识到当我们相互理解时,事情往往就是我们看起来的样子。“从别人的角度看事物 甚至可以同于理解事物,而不能别人的角度事物显然是一个人能力的失败作为一个常识性的心理学家。但是,如果模拟是我们清醒生活中如此明显的一部分,为什么有人要否认它的存在呢?如果没有人(即使是理论理论家)否认发生了那么模拟理论又该如何理论理论呢?为什么理论理论家不能回答说:'我同意:这就是我们理解他人思想的方式 ;但是除非你了解一些基本理论,否则你无法模拟,这些理论的真理使 模拟成为可能。 这个基本理论不需要 有意识地应用;但众所周知,这并不意味着它不存在

The answer depends on what we mean by saying that common-sense psychology is a theory that is ‘applied’ to thinkers. In the section on ‘Common-sense psychology’ above, I pointed out that the Theory Theory could say that common-sense psychological generalisations were unconsciously known by thinkers (an idea we will return to in Chapter 4). But, on the face of it, it looks as if this view is not directly threatened by the simulation theory. Since simulation relates to what we are explicitly aware of in acts of interpretation, the fact that we simulate others does not show that we do not have tacit knowledge of common-sense psychological generalisations. Simulation theorists therefore need to provide independent arguments against this view.

It is important not to rush to any hasty conclusions. It is still relatively early days for the simulation theory, and many of the details have not been worked out yet. However, it does seem that the Theory Theory can defend itself if it is allowed to appeal to the idea of tacit knowledge; and the Theory Theory can, it seems, accept the main insight of the simulation theory, that we often interpret others by thinking of things from their point of view etc. In this way, it might be possible to hold the best elements of both approaches to understanding other minds. Maybe there is no real dispute here, only a difference of emphasis.

Conclusion: from representation to computation

So how do we know about the mind? I’ve considered and endorsed an answer: by applying conjectures about people’s minds – or applying a theory of the mind – to explain their behaviour. Examining the theory then helps us to answer the other question – what do we know about the mind? This question can be answered by finding out what the theory says about minds. As I interpret common-sense psychology, it says (at least) that thoughts are states of mind which represent the world and which have effects in the world. That’s how we get from an answer to the ‘How?’ question to an answer to the ‘What?’ question.

There are various ways in which an enquiry could go from here. The idea of a state which represents the world, and causes its possessor to behave in a certain way, is not an idea that is applicable only to human beings. Since our knowledge of thoughts is derived from behaviour – and not necessarily verbal behaviour – it is possible to apply the basic elements of common-sense psychology to other animals too.

How far down the evolutionary scale does this sort of explanation go? To what sorts of animals can we apply this explanation? Consider this striking passage from C.R. Gallistel:

On the featureless Tunisian desert, a long-legged, fast-moving ant leaves the protection of the humid nest on a foraging expedition. It moves across the desert in tortuous loops, running first this way, then that, but gradually progressing ever farther away from the life-sustaining humidity of the nest. Finally it finds the carcass of a scorpion, uses its strong pincers to gouge out a chunk nearly its own size, then turns to orient within one or two degrees of the straight line between itself and the nest entrance, a 1-millimetre-wide hole, 40 metres distant. It runs a straight line for 43 metres, holding its course by maintaining its angle to the sun. Three metres past the point at which it should have located the entrance, the ant abruptly breaks into a search pattern by which it eventually locates it. A witness to this homeward journey finds it hard to resist the inference that the ant on its search for food possessed at each moment a representation of its position relative to the entrance of the nest, a spatial representation that enabled it to compute the solar angle and the distance of the homeward journey from wherever it happened to encounter food.37

Here the ant’s behaviour is explained in terms of representations of locations in its environment. Something else is added, however: Gallistel talks about the ant ‘computing’ the solar angle and the distance of the return journey. How can we make sense of an ant ‘computing’ representations? Why is this conclusion ‘hard to resist’? For that matter, what does it mean to compute representations at all? It turns out, of course, that what Gallistel thinks is true of the ant, many people think is true of our minds – that as we move around and think about the world, we compute representations. This is the topic of the next chapter.
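To give a flavour of what ‘computing’ a representation might involve here, the following is a minimal sketch in Python of dead reckoning (path integration) of the sort Gallistel describes: the outward journey is summed, step by step, into a vector representing the ant’s position relative to the nest, from which a homeward bearing and distance can be read off. The function name and the step data are invented for illustration; this is a toy, not Gallistel’s own model.

```python
import math

def home_vector(steps):
    """steps: (heading_in_degrees, distance) pairs for the outward journey.
    Returns the (bearing, distance) of the straight line back to the nest."""
    x = y = 0.0
    for heading, dist in steps:
        rad = math.radians(heading)
        x += dist * math.cos(rad)  # running sum of displacement vectors:
        y += dist * math.sin(rad)  # a representation of position relative to the nest
    bearing = math.degrees(math.atan2(-y, -x)) % 360  # direction pointing back home
    return bearing, math.hypot(x, y)

# tortuous outward loops (invented data), then the straight run home
outward = [(10, 12.0), (100, 7.5), (220, 5.0), (40, 18.0)]
bearing, distance = home_vector(outward)
print(f'head at {bearing:.0f} degrees for {distance:.1f} metres')
```

Whether anything relevantly like this summation goes on in the ant – and in what sense a nervous system can be said to perform it – is exactly the question the next chapter takes up.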

Further reading

Jaegwon Kim’s The Philosophy of Mind (Boulder, Col.: Westview 1996) is one of the best general introductions to the philosophy of mind; also good is David Braddon-Mitchell and Frank Jackson, Philosophy of Mind and Cognition (Oxford: Blackwell 1996). William Lyons’ Matters of the Mind (Edinburgh: Edinburgh University Press 2001) is readable and accessible, with a novel approach to some issues. Behaviourism is adequately represented by Part 1 of W.G. Lycan (ed.), Mind and Cognition (Oxford: Blackwell 1990; second edition 1998); the whole anthology also contains essential readings on eliminative materialism and common-sense or ‘folk’ psychology. For the idea that mental states are causes of behaviour, see Donald Davidson’s essays collected in his Essays on Actions and Events (Oxford: Oxford University Press 1980); Davidson also combines this idea with a denial of psychological laws (in ‘Mental events’ and ‘The material mind’). For the causal theory of mind, D.M. Armstrong’s classic A Materialist Theory of the Mind (London: Routledge 1968; reprinted 1993) is well worth reading. Daniel C. Dennett has developed a distinctive position on the relations between science and folk psychology and between representation and causation: see the essays in The Intentional Stance (Cambridge, Mass.: MIT Press 1987), especially ‘True believers’ and ‘Three kinds of intentional psychology’. An interesting version of the ‘simulation’ alternative to the Theory Theory is Jane Heal, ‘Replication and functionalism’ in J. Butterfield (ed.) Language, Mind and Logic (Cambridge: Cambridge University Press 1986). The simulation/Theory Theory debate is well represented in the two volumes edited by Martin Davies and Tony Stone: Folk Psychology: The Theory of Mind Debate and Mental Simulation: Evaluations and Applications (both Oxford: Blackwell 1995).

3 ...

Computers and thought ...

So far, I have tried to explain the philosophical problem of the nature of representation, and how it is linked with our understanding of other minds. What people say and do is caused by what they think – what they believe, hope, wish, desire and so on – that is, by their representational states of mind or thoughts. What people do is caused by the ways they represent the world to be. If we are going to explain thought, then we have to explain how there can be states which can at the same time be representations of the world and causes of behaviour.

To understand how anything can have these two features it is useful to introduce the idea of the mind as a computer. Many psychologists and philosophers think that the mind is a kind of computer. There are many reasons why they think this, but the link with our present theme is this: a computer is a causal mechanism which contains representations. In this chapter and the next I shall explain this idea, and show its bearing on the problems surrounding thought and representation. ...

The very idea that the mind is a computer, or that computers might think, inspires strong feelings. Some people find it exciting, others find it preposterous, or even degrading to human nature. I will try and address this controversial issue in as fair-minded a way as possible, by assessing some of the main arguments for and against the claims that computers can think, and that the mind is a computer. But first we need to understand these claims. ...

Asking the right questions ...

It is crucial to begin by asking the right questions. For example, sometimes the question is posed as: can the human mind be modelled on a computer? But, even if the answer to this question is ‘Yes’, how could that show that the mind is a computer? The British Treasury produces computer models of the economy – but no-one thinks that this shows that the economy is a computer. This chapter will explain how this confusion can arise. One of this chapter’s main aims is to distinguish between two questions:

Can a computer think? Or, more precisely, can anything think simply by being a computer?

Is the human mind a computer? Or, more precisely, are any actual mental states and processes computational?

This chapter will be concerned mainly with question 1, and Chapter 4 with question 2. The distinction between the two questions may not be clear yet, but, by the end of the chapter, it should be. To understand these two questions, we need to know at least two things: first, what a computer is; and, second, what it is about the mind that leads people to think that a computer could have a mind, or that the human mind could be a computer.
将主要关注问题1,第 4 章将主要关注问题 2。这两个问题之间的区别可能还不清楚,但是,到本章结束时,它应该已经很清楚了。要回答这两个问题,我们至少需要了解两件事: 第一,什么是计算机;其次,是什么关于思想导致人们认为计算机可以思想,或者人类的思想可以是计算机。

What is a computer? We are all familiar with computers – many of us use them every day. To many they are a mystery, and explaining how they work might seem a very difficult task. However, though the details of modern computers are amazingly complex, the basic concepts behind them are actually beautifully simple. The difficulty in understanding computers is not so much in grasping the concepts involved, but in seeing why these concepts are so useful.

If you are familiar with the basic concepts of computers, you may wish to skip the next five sections, and move directly to the section of this chapter called ‘Thinking computers?’ on p. 109. If you are not familiar with these concepts, then some of the terminology that follows may be a little daunting. You may want to read through the next few sections quite quickly, and the point of them will become clearer after you have then read the rest of this chapter and Chapter 4.

To prepare yourself for understanding computers, it’s best to abandon most of the presuppositions that you may have about them. The personal computers we use in our everyday lives normally have a typewriter-style keyboard and a screen. Computers are usually made out of a combination of metal and plastic, and most of us know that they have things inside them called ‘silicon chips’, which somehow make them work. Put all these ideas to one side for the moment – none of these features of computers is essential to them. It’s not even essential to computers that they are electronic.


What, then, does a computer have to be? The rough definition I shall eventually arrive at is this: a computer is a device which processes representations in a systematic way. This will remain a little vague until we have a more precise understanding of ‘process’, ‘representation’ and ‘systematic’. To understand these ideas, we need to grasp two further ideas. The first is the rather abstract mathematical idea of computation. The second is the idea of how a computation can be automated. I shall take these ideas in turn.


Computation, functions and algorithms


The first idea we need is that of a mathematical function. We are all familiar with this idea from elementary arithmetic. Among the first things we learn at school are the basic arithmetic functions: addition, subtraction, multiplication and division. We then usually learn about other functions, such as the square function (by which we produce the square of a number x by multiplying x by itself), logarithms and so on.


As we learned at school, arithmetic functions are not numbers themselves, but things that are ‘done’ to numbers. What we learn in basic arithmetic is how to take some numbers and apply certain functions to them. When we add two numbers, say 7 and 5, we in effect take the two numbers as the ‘input’ to the addition function and get another number, 12, as the ‘output’. We represent this addition as 7 + 5 = 12. Of course, we can put any two numbers in the positions occupied by 7 and 5 (the input positions), and the addition function will determine a unique number as output. It may take training to work out what the output is for any given numbers – but the point is that, according to the addition function, exactly one number is the function’s output for any given group of input numbers.


If we take the sum 7 + 5 = 12 and remove the numerals 7, 5 and 12 from it, we get a complex symbol with three ‘gaps’ in it:


_ + _ = _. In the first two gaps we write the inputs to the addition function, and in the third gap we write the output. The function itself can then be represented as _ + _, with the two gaps indicating where the input numbers should be entered. These gaps are standardly represented by italic letters, x, y, z and so on – so the function would be written x + y. These letters, known as ‘variables’, mark the different gaps or places of the function.


Now for some terminology. The input to a function is called the argument of the function, and the output is called the value of the function. The arguments in the equation x + y = z are the pairs of numbers x and y such that z is their value. That is, the value of the addition function is the sum of the arguments of that function. The value of the subtraction function is the result of subtracting one number (an argument) from another. And so on.

Though the mathematical theory of functions is very complex in its details, the basic idea of a function can be explained using simple examples such as addition. And, though I introduced it with a mathematical example, the notion of a function is extremely general and can be extended to things other than numbers. For example, because everyone has only one natural father, we can think of the expression ‘the natural father of x’ as describing a function, which takes people as its arguments and gives you their fathers as values. (Those familiar with elementary logic will also know that expressions such as ‘and’ and ‘or’ are known as truth-functions, e.g. the complex proposition P&Q involves a function that yields the value True when both its arguments are true, and the value False otherwise.)
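The point can be made vivid with a few lines of code (my illustration, with made-up names, not anything from the mathematical theory itself): a function in a programming language likewise determines exactly one value for any given arguments, whether those arguments are numbers or truth-values.

# A sketch of the idea of a function: each takes arguments (inputs)
# and determines exactly one value (output) for them.
def add(x, y):
    # the addition function: its value is the sum of its arguments
    return x + y

def conj(p, q):
    # the truth-function 'and': True only when both arguments are true
    return p and q

print(add(7, 5))          # 12
print(conj(True, False))  # False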

The idea of a function, then, is a very general one, and one that we implicitly rely on in our everyday life (every time we add up the prices of something in a supermarket, for example). But it is one thing to say what a function is, in the abstract, and another to say how we use them. To know how to employ a function, we need a method for getting the value of the function for a given argument or arguments. Remember what happens when you learn elementary arithmetic. Suppose you want to calculate the product of two numbers, 127 and 21. The standard way of calculating this is the method of long multiplication:

127 ...

× 21 ...

127 ...

+ 2540 ...

2667 ...

What you are doing when you perform long multiplication is so obvious that it would be banal to spell it out. But, in fact, what you know when you know how to do this is something incredibly powerful. What you have is a method for calculating the product of any two numbers – that is, of calculating the value of the multiplication function for any two arguments. This method is entirely general: it does not apply to some numbers and not to others. And it is entirely unambiguous: if you know the method, you know at every stage what to do next to produce the answer.

(Compare a method like this with the methods we use for getting on with people we have met for the first time. We have certain rough-and-ready rules we apply: perhaps we introduce ourselves, smile, shake hands, ask them about themselves, etc. But obviously these methods do not yield definite ‘answers’; sometimes our social niceties backfire.) ...

A method, such as long multiplication, for calculating the value of a function is known as an algorithm. Algorithms are also called ‘effective procedures’ as they are procedures which, if applied correctly, are entirely effective in bringing about their results (unlike the procedures we use for getting on with people). They are also called ‘mechanical procedures’, but I would rather not use this term, as in this book I am using the term ‘mechanical’ in a less precise sense.

It is very important to distinguish between algorithms and functions. An algorithm is a method for finding the value of a function. A function may have more than one algorithm for finding its values for any given arguments. For example, we multiplied 127 by 21 by using the method of long multiplication. But we could have multiplied it by adding 127 to itself 20 times. That is, we could have used a different algorithm.

To say that there is an algorithm for a certain arithmetical function is not to say that an application of the algorithm will always give you a number as an answer. For example, you may want to see whether a certain number divides exactly into another number without remainder. When you apply your algorithm for division, you may find out that it doesn’t. So, the point is not that the algorithm gives you a number as an answer, but that it always gives you a procedure for finding out whether there is an answer.

When there is an algorithm that gives the value of a function for any argument, then mathematicians say that the function is computable. The mathematical theory of computation is, in its most general terms, the theory of computable functions, i.e. functions for which there are algorithms. ...

Like the notion of a function, the notion of an algorithm is extremely general. Any effective procedure for finding the solution to a problem can be called an algorithm, so long as it satisfies the following conditions: ...

At each stage of the procedure, there is a definite thing to do next. Moving from step to step does not require any special guesswork, insight or inspiration. ...

The procedure can be specified in a finite number of steps. ...

So we can think of an algorithm as a rule, or a bunch of rules, for giving the solution to a given problem. These rules can then be represented as a ‘flow chart’. Consider, for example, a very simple algorithm for multiplying two whole numbers, x and y, which works by adding y to itself. It will help if you imagine the procedure being performed on three pieces of paper, one for the first number (call this piece of paper X), one for the second number (call this piece of paper Y) and one for the answer (call this piece of paper the ANSWER). ...


3.1显示了流程图;通过以下一系列步骤表示计算


Step (i): Write 0 on the ANSWER, and go to step (ii)

Step (ii): Is the number written on X = 0?


If yes, go to step (v); if no, go to step (iii)

Step (iii): Subtract 1 from the number written on X, write the result on X, and go to step (iv) ...

Step (iv): Add the number written on Y to the ANSWER, and go to step (ii) ...

Step (v): STOP ...

Figure 3.1 Flow chart for the multiplication algorithm.

Let’s apply this to a particular calculation, say 4 times 5. (If you are familiar with this sort of procedure, you can skip this example and move on to the next paragraph.)

Begin by writing the numbers to be multiplied, 4 and 5, on the X and Y pieces of paper respectively. Apply step (i) and write 0 on the ANSWER. Then apply step (ii) and ask whether the number written on X is 0. It isn’t – it’s 4. So move to step (iii), and subtract 1 from the number written on X. This leaves you with 3, so you should write this down on X, and move to step (iv). Add the number written on Y (i.e. 5) to the ANSWER, which makes the ANSWER read 5. Move to step (ii), and ask again whether the number on X is 0. It isn’t – it’s 3. So move to step (iii), subtract 1 from the number written on X, write down 2 on X and move to step (iv). Add the number written on Y to the ANSWER, which makes the ANSWER read 10. Ask again whether the number written on X is 0. It isn’t – it’s 2. So move to step (iii), subtract 1 from the number written on X, write down 1 on X and move to step (iv). Add the number written on Y to the ANSWER, which makes the ANSWER read 15. Ask again whether the number written on X is 0; it isn’t, it’s 1. So move to step (iii), subtract 1 from the number written on X, write down 0 on X and move to step (iv). Add the number written on Y to the ANSWER, which makes the ANSWER read 20. Move to step (ii) and ask whether the number written on X is 0. This time it is, so move to step (v), and stop the procedure. The number written on the ANSWER is 20, which is the result of multiplying 4 by 5.1


This is a very laborious way of multiplying 4 by 5. But the point of the example is not that this is a good procedure for us to use. The point is that it is an entirely effective procedure: at each stage, it is completely clear what to do next, and the procedure terminates in a finite number of steps. The number of steps may be very large; but, for any pair of finite numbers, it is still a finite number of steps.
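For readers who find code clearer than flow charts, the same procedure can be sketched in Python (my own illustration, not the book’s; the three pieces of paper become three variables):

# The multiplication algorithm from the flow chart: multiply x by y
# by repeatedly adding y to a running answer.
def multiply(x, y):
    answer = 0               # step (i): write 0 on the ANSWER
    while x != 0:            # step (ii): is the number written on X = 0?
        x = x - 1            # step (iii): subtract 1 from the number on X
        answer = answer + y  # step (iv): add the number on Y to the ANSWER
    return answer            # step (v): STOP

print(multiply(4, 5))        # 20, as in the worked example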


Steps (iii) and (iv) of the example illustrate an important feature of algorithms. In applying the algorithm for multiplication, we employed other arithmetic operations: subtraction in step (iii) and addition in step (iv). There is nothing wrong with this, so long as there are also algorithms for the operations of subtraction and addition – as of course there are. In fact, most algorithms use other algorithms at some stage. Think of long multiplication: it uses addition when adding up the results of the ‘short’ multiplications. So, in doing long multiplication, you use an algorithm for addition. Our laborious multiplication algorithm can thus be broken down into steps which rely on other (perhaps simpler) algorithms and simple ‘actions’. As we shall see, this idea is very important in understanding computers.


The fact that algorithms can be represented by flow charts indicates the generality of the notion of an algorithm. Just as we can write flow charts for all sorts of procedures, so we can write algorithms for all sorts of things. Certain recipes, for example, can be represented as flow charts. Consider this algorithm for boiling an egg:


Turn on the stove

Fill the pan with water

Put the pan on the stove

When the water boils, add one egg, and set the timer

When the timer rings, turn off the gas

Remove the egg from the pan

Result: one boiled egg.

This is a process that can be completed in a finite number of steps, and at each step there is a definite, unambiguous, thing to do next. No inspiration or guesswork is required. So, in a sense, boiling an egg can be described as an algorithmic procedure (see Figure 3.2). ...


Figure 3.2 A flow chart for boiling an egg. ...

Turing machines ...

The use of algorithms to compute the values of functions is at least as old as Ancient Greek mathematics. But it was only relatively recently (in fact, in the 1930s) that the idea came under scrutiny, and mathematicians tried to give a precise meaning to the concept of an algorithm. From the end of the nineteenth century, there had been intense interest in the foundations of mathematics. What makes mathematical statements true? How can mathematics be placed on a firm foundation? One question which became particularly pressing was: what determines whether a certain method of calculation is adequate for the task in hand? We know in particular cases whether an algorithm is adequate, but is there a general method that will tell us, for any proposed method of calculation, whether or not it is an algorithm?


This question was of profound theoretical importance for mathematics, because algorithms lie at the heart of mathematical practice – yet if we cannot say what they are, we cannot really say what mathematics is. The question was answered in 1937 by the brilliant British mathematician Alan Turing. Turing (1912–1954) was not only a mathematical genius, but also, indirectly, one of the most influential figures of the twentieth century. As we shall see, he developed the basic concepts of the modern digital computer, with all their consequences. But he is also famous for breaking the Nazis’ Enigma code during the Second World War. The code was used for communicating with the U-boats that were then destroying the British navy, and it can be argued that breaking the code was one of the main factors that prevented Britain from losing the war.2


Turing answered this question about the nature of computation in a vivid and original way. In effect, he asked: what is the simplest possible device that can perform any computation, no matter how complex? He then went on to describe such a device, which is now known (naturally enough) as a ‘Turing machine’.

A Turing machine is not a machine in the ordinary sense of the word. That is, it is not a physical machine, but rather an abstract, theoretical specification of a possible machine. Though people have built machines to these specifications, the point of them is not (in the first place) to be built, but to illustrate some very general properties of algorithms and computations.

There can be many kinds of Turing machines for different kinds of computation. But they all have the following features in common: a tape divided into squares and a device that can write symbols on the tape and then read those symbols.3 The device is also in certain ‘internal states’ (more on these later), and it can move the tape to the right or to the left, one square at a time. Let us suppose for simplicity that there are only two kinds of symbol that can be written on the tape: ‘1’ and ‘0’. Each symbol occupies just one square of the tape – so the machine can only read one square at a time. (We don’t have to worry yet what these symbols ‘mean’ – just consider them as marks on the tape.)

So the device can only do four things: ...

It can move the tape one square at a time, from left to right or from right to left. ...

It can read a symbol on the tape. ...

It can write a symbol on the tape, either by writing onto a blank square or by overwriting another symbol. ...

It can change its ‘internal state’. ...

The possible operations of a particular machine can be represented by the machine’s ‘machine table’. The machine table is, in effect, a set of instructions of the form ‘if the machine is in state X and reading symbol S, then it will perform a certain operation (e.g. writing or erasing a symbol, moving the tape) and change to state Y (or stay in the same state) and move the tape to the right/left’. If you like, you can think of the machine table as the machine’s ‘program’: it tells the machine what to do. In specifying a particular position in the machine table, we need to know two things: the current input to the machine and its current state. What the machine does is entirely fixed by these two things.

This will all seem pretty abstract, so let’s consider a specific example of a Turing machine, one that performs a simple mathematical operation, that of adding 1 to a number.4 In order to get a machine to perform a particular operation, we need to interpret the symbols on the tape, i.e. take them to represent something. Let’s suppose that our 1s on the tape represent numbers: 1 represents the number 1, obviously enough. But we need ways of representing numbers other than 1, so let’s use a simple method: rather as a prisoner might represent the days of his imprisonment by rows of scratches on the wall, a line or ‘string’ of n 1s represents the number n. So, 111 represents 3, 11111 represents 5, and so on.

To enable two or more numbers to be written on a tape, we can separate numbers by using one or more 0s. The 0s simply function to mark spaces between the numbers – they are the only ‘punctuation’ in this simple notation. So for example, the tape,

. . . 000011100111111000100 . . .

represents the sequence of numbers 3, 6, 1. In this notation, the number of 0s is irrelevant to which number is written down. The marks . . . indicate that the blank tape continues indefinitely in both directions. ...

We also need a specification of the machine’s ‘internal states’; it turns out that the simple machine we are dealing with only needs two internal states, which we might as well call state A (the initial state) and state B. The particular Turing machine we are considering has its behaviour specified by the following instructions: ...


If the machine is in state A, and reads a 0, then it stays in state A, writes a 0, and moves one square to the right.

If the machine is in state A, and reads a 1, then it changes to state B, writes a 1, and moves one square to the right. ...

If the machine is in state B, and reads a 0, then it changes to state A, writes a 1 and stops. ...


If the machine is in state B, and reads a 1, then it stays in state B, writes a 1, and moves one square to the right.

The machine table for this machine will look like Figure 3.3. ...

MACHINE STATE A, INPUT 1: Change to B; Write a 1; Move tape to right

MACHINE STATE A, INPUT 0: Stay in A; Write a 0; Move tape to right

MACHINE STATE B, INPUT 1: Stay in B; Write a 1; Move tape to right

MACHINE STATE B, INPUT 0: Change to A; Write a 1; STOP

Figure 3.3 A machine table for a simple Turing machine. ...

Let’s now imagine presenting the machine with part of a tape that looks like this: ...

0 0 0 1 1 0 0 0

This tape represents the number 2. (Remember, the 0s merely serve as ‘punctuation’, they don’t represent any number in this notation.) What we want the machine to do is add 1 to this number, by apply- ing the rules in the machine table. ...

This is how it does it. Suppose it starts off in the initial state, state A, reading the square of tape at the extreme right. Then it follows the instructions in the table. The tape will ‘look’ like this during this process (the square of the tape currently being read by the machine is shown in brackets):

(i) . . . 0 0 0 1 1 0 0 [0] . . .

(ii) . . . 0 0 0 1 1 0 [0] 0 . . .

(iii) . . . 0 0 0 1 1 [0] 0 0 . . .

(iv) . . . 0 0 0 1 [1] 0 0 0 . . .

(v) . . . 0 0 0 [1] 1 0 0 0 . . .

(vi) . . . 0 0 [0] 1 1 0 0 0 . . .

(vii) . . . 0 0 1 1 1 0 0 0 . . .

At line (vi), the machine is in state B, it reads a 0, so it writes a 1, changes to state A, and stops. The ‘output’ is on line (vii): this represents the number 3, so the machine has succeeded in its task of adding 1 to its input. ...
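To bring out just how mechanical this is, the machine table can be written out as data and executed by a short Python sketch (my own illustration of the machine just described, not Turing’s own notation):

# The 'add 1' Turing machine. The table maps a (state, symbol) pair
# to (new state, symbol to write, action).
table = {
    ('A', '0'): ('A', '0', 'RIGHT'),  # stay in A, write 0, move tape right
    ('A', '1'): ('B', '1', 'RIGHT'),  # change to B, write 1, move tape right
    ('B', '1'): ('B', '1', 'RIGHT'),  # stay in B, write 1, move tape right
    ('B', '0'): ('A', '1', 'STOP'),   # change to A, write 1, stop
}

tape = list('00011000')  # the tape representing the number 2
head = len(tape) - 1     # start at the extreme right of the tape
state = 'A'              # the initial state

while True:
    state, symbol, action = table[(state, tape[head])]
    tape[head] = symbol
    if action == 'STOP':
        break
    head -= 1  # moving the tape right = reading the next square to the left

print(''.join(tape))  # 00111000: three 1s, i.e. the number 3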

But what, you may ask, has this machine really done? What is the point of all this tedious shuffling around along an imaginary tape? Like our example of an algorithm for multiplication above, it seems a laborious way of doing something utterly trivial. But, as with our algorithm, the point is not trivial. What the machine has done is compute a function. It has computed the function x + 1 for the argument 2. It has computed this function by using only the simplest possible ‘actions’, the ‘actions’ represented by the four squares of the machine table. And these are only combinations of the very simple steps that were part of the definition of all a Turing machine can do (read, write, change state, move the tape). I shall explain the lesson of this in a moment.


You may be wondering what the role of the ‘internal states’ is in all this. Isn’t something mysterious being smuggled into the description of this very simple device by talk of its ‘internal’ states? Perhaps the internal states are what really do the computing? This worry is very natural, but it is misplaced. The internal states of the machine are nothing over and above what the machine table says they are. Internal state B, by definition, is just that state such that, if the machine gets a 1 as input, it does such-and-such, and, if it gets a 0 as input, it does so-and-so. That is all there is to these states. (The term ‘internal’ may therefore be misleading, since it suggests that the states have some ‘hidden nature’.)


设计一个 执行复杂运算图灵例如我们上一节 乘法算法),我们需要一个更复杂的 机器表,更多的内部状态,更多tape更复杂的符号。但是我们不需要任何更复杂的基本操作。 我们无需深入研究更复杂的图灵机的细节,因为基本点可以通过我们的简单加法器来说明然而,详述符号问题是重要的。


Our prisoner’s tally notation for numbers has many obvious drawbacks. One is that it has no way of representing zero – a serious flaw. Another is that computations with very large numbers would take a very long time, since the machine can read only one square at a time. (Adding 1 to a number as large as 7,000,000 would require a tape with a square for every inhabitant of London.) A much more efficient system is the binary system, or ‘base 2’, in which all the natural numbers are represented by combinations of 1s and 0s. Recall how binary notation works: the places that are occupied by powers of 10 in the standard ‘denary’ (base 10) notation are occupied by powers of 2 in binary. This gives us the following translations of denary into binary:

1 = 1

2 = 10

3 = 11

4 = 100

5 = 101

6 = 110

7 = 111

8 = 1000


And so on. Obviously, coding numbers in binary enables us to represent much larger numbers far more efficiently than the prisoner’s tally notation does.
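As a quick check on how the notation works, here is a sketch (mine, using the standard method of repeated division by 2) that reproduces the table above:

# Denary-to-binary conversion by repeated division by 2: the
# remainders, read in reverse order, are the binary digits.
def to_binary(n):
    digits = ''
    while n > 0:
        digits = str(n % 2) + digits
        n = n // 2
    return digits or '0'

for n in range(1, 9):
    print(n, '=', to_binary(n))  # 1 = 1, 2 = 10, ... 8 = 1000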

An advantage of using binary notation is that we can design Turing machines of great complexity without having to add more symbols to the basic repertoire. We started off with two kinds of symbols, 0 and 1. In our prisoner’s tally notation, the 0s merely served to divide the numbers from each other. In base 2, the 0s serve as numerals, enabling us to write any number as a string of 1s and 0s. But notice that the machine still only needs the same number of basic operations: read a 1, write a 1, read a 0, write a 0, move the tape. So using base 2 gives us the potential of representing many more numbers much more efficiently without having to add more basic operations to the machine. (Obviously we need punctuation too, to show where one instruction or piece of input stops and another one starts. But, with sufficient ingenuity, we can code these as 1s and 0s too.) ...

We are now on the brink of a very exciting discovery. With an adequate notation, such as binary, not only the input to a Turing machine (the initial tape) but the machine table itself can be coded as numbers in the notation. To do this, we need a way of labelling the distinct operations of the machine (read, write, etc.), and the ‘internal states’ of the machine, with numbers. We used the labels ‘A’ and ‘B’ for the internal states of our machine. But this was purely arbitrary: we could have used any symbols whatsoever for these states: %, @, *, or whatever. So we could also use numbers to represent these states. And if we use base 2, we can code these internal states and ‘actions’ as 1s and 0s on a Turing machine tape.


Since any Turing machine is entirely defined by its machine table, and any machine table can be numerically coded, it follows that any Turing machine can be numerically coded. So one Turing machine can be coded in binary on the tape of another Turing machine, and that other machine can take the first machine’s tape as its input: it can read the first Turing machine. All it needs is a way of converting the operations described on the tape of the first machine (the program) into its own operations. And this is just another machine table, which can itself be coded. Suppose, for example, that we code our ‘add 1’ machine in binary. It can then be represented as a string of 1s and 0s on a tape. If we add to the tape some more 1s and 0s representing a number (say 127), then these, plus the coding of our ‘add 1’ machine, can become the input to another Turing machine. This second machine has a program which interprets our ‘add 1’ machine, and it can then do exactly what our ‘add 1’ machine does: it can add 1 to the input number 127. It will do this by ‘mimicking’ the behaviour of our original ‘add 1’ machine.

Now, the exciting discovery is this: there is a Turing machine which can mimic the behaviour of any other Turing machine. Because any Turing machine can be numerically coded, it can be fed in as the input to another Turing machine, so long as that machine has a way of reading its tape. Turing proved from this that, to perform all the operations that Turing machines can perform, we don’t need a separate machine for each operation. We need only one machine that is capable of mimicking every other machine. This machine is called a universal Turing machine. And it is the idea of a universal Turing machine that lies behind modern general purpose digital computers. In fact, it is not an exaggeration to say that the idea of a universal Turing machine has probably affected the character of all our lives.
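The Python sketch of the ‘add 1’ machine given earlier already hints at this idea: because the machine table there is just data, one and the same simulator can mimic any machine whose table and tape it is given. In that toy spirit (my own sketch; Turing’s real construction goes further and codes the table itself onto the tape):

# A toy 'universal' simulator: feed it any machine table and tape.
def run(table, tape, state, head):
    while True:
        state, symbol, action = table[(state, tape[head])]
        tape[head] = symbol
        if action == 'STOP':
            return ''.join(tape)
        head -= 1  # this toy version only handles 'move tape right'

# Fed the 'add 1' table and tape from the earlier sketch:
# run(table, list('00011000'), 'A', 7) returns '00111000'.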

However, to say that a universal Turing machine can do anything that any particular Turing machine can do only raises the question: what can particular Turing machines do? What sorts of operations can they perform, apart from the utterly trivial one I illustrated? ...

Turing claimed that any computable function can in principle be computed on a Turing machine, given enough tape and enough time. That is, any algorithm could be executed by a Turing machine. Most logicians and mathematicians now accept the claim that to be an algorithm is simply to be capable of execution on some Turing machine, i.e. being capable of execution on a Turing machine in some sense tells us what an algorithm is. This claim is called Church’s thesis after the American logician Alonzo Church (b. 1903), who independently came to conclusions very similar to those of Turing. (It is sometimes called the Church–Turing thesis.)6 The basic idea of the thesis is, in effect, to give a precise sense to the notion of an algorithm, to tell us what an algorithm is. ...

You may still want to ask: how has the idea of a Turing machine told us what an algorithm is? How has it helped to appeal to these interminable ‘tapes’ and the tedious strings of 1s and 0s written on them? Turing’s answer could be put as follows: what we have done is reduced anything which we naturally recognise as an effective procedure to a series of simple steps performed by a very simple device. These steps are so simple that it is not possible for anyone to think of them as mysterious. What we have done, then, is to make the idea of an effective procedure unmysterious. ...

Coding and symbols ...

A Turing machine is a certain kind of input–output device. You put a certain thing ‘into’ the machine – a tape containing a string of 1s and 0s – and you get another thing out – a tape containing another string of 1s and 0s. In between, the machine does certain things to the input (the things determined by its machine table or instructions) in order to turn it into the output.


One thing that may worry you, however, is not the definition of a Turing machine, but the idea that such a machine can perform any algorithm. It is easy enough to see how it can perform the ‘add 1’ algorithm, and, with a little imagination, we can see how it could perform the multiplication algorithm described earlier. But I also said that you can write algorithms for simple recipes, such as boiling an egg, or for finding out which key opens a certain lock. How can a Turing machine do that? Surely a Turing machine can only compute with numbers, since that is all that can be written on its tape?


Of course, a Turing machine cannot boil an egg or open a door. But the algorithms mentioned are descriptions of how to boil an egg, and so on. And these descriptions, given the right notation, can be coded into a Turing machine.


How? Here is one simple way. Our algorithms were written in English, so first we need a way of coding instructions written in English text into numbers. We could simply associate each letter of the English alphabet, and each significant piece of punctuation, with a number, as follows:


A1、B2、C–3、D4


So my name would be written:


20 9 13

3 18 1 14 5


Obviously, punctuation is crucial. We need one way of indicating when one letter stops and another begins, another of indicating when one word stops and another begins, and yet another way of telling when a whole piece of text (a machine table, say) stops and another begins. But this presents no problem in principle. (Think of how old-fashioned telegrams used words for punctuation, for example separating sentences with ‘STOP’.) Once we’ve coded a piece of text into numbers, we can rewrite these numbers in binary.
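To make this concrete, here is a sketch (my own, using the simple letter-numbering just described, not any scheme real programmers use) of coding text into numbers and then into binary:

# Code letters as numbers (A = 1, B = 2, ...), then write those
# numbers in binary, with spaces as crude 'punctuation'.
def letter_codes(text):
    return [ord(c) - ord('A') + 1 for c in text.upper() if c.isalpha()]

def codes_to_binary(codes):
    return ' '.join(format(code, 'b') for code in codes)

print(letter_codes('TIM'))                   # [20, 9, 13]
print(codes_to_binary(letter_codes('TIM')))  # 10100 1001 1101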

So we could then convert any algorithm written in English (or any other language) into binary code. And this could then be written on a Turing machine’s tape, and serve as input to the universal Turing machine.

Of course, actual computer programmers don’t use this system of notation for text. But I’m not interested in the real details at the moment: the point I’m trying to get across is just that once you realise that any piece of text can be coded in terms of numbers, then it is obvious that any algorithm that can be written in English (or in any other language) can be run on a Turing machine.

This way of representing is wholly digital, in the sense that each represented element (a letter, or word) is represented in an entirely ‘on–off’ way. Any square on a Turing machine’s tape has either a 1 on it or a 0. There are no ‘in-between’ stages. The opposite of the digital form of representation is the analogue form. The distinction is best illustrated by the familiar example of analogue and digital clocks. Digital clocks represent the passage of time in a step-by-step way, with distinct numbers for each second (say), and nothing in between these numbers. Analogue clocks, by contrast, mark the passage of time by the smooth movement of a hand across the face. Analogue computers are not directly relevant to the issues raised here – the computers discussed in the context of computers and thought are all digital computers.7

We are now, finally, getting close to our characterisation of computers. Remember that I said that a computer is a device that processes representations in a systematic way. To understand this, we needed to give a clear sense to two ideas: (i) ‘processes in a systematic way’ and (ii) ‘representation’. The first idea has been explained in terms of the idea of an algorithm, which has in turn been illuminated by the idea of a Turing machine. The second idea is implicit in the idea of the Turing machine: for the machine to be understood as actually computing a function, the numbers on its tape have to be taken as standing for or representing something. Other representations – e.g. English sentences – can then be coded into these numbers. ...

Sometimes computers are called information processors. Sometimes they are called symbol manipulators. In my terminology, this is the same as saying that computers process representations. Representations carry information in the sense that they ‘say’ something, or are interpretable as ‘saying’ something. That is what computers process or manipulate. How they process or manipulate is by carrying out effective procedures. ...

Instantiating a function and computing a function ...

This talk of representations now enables us to make a very important distinction that is crucial for understanding how the idea of computation applies to the mind.8


Remember that the idea of a function can be extended beyond mathematics. In scientific theories, for example, scientists often use functions to describe the world. Consider a famous simple example: Newton’s second law of motion, which says that the acceleration of a body depends on its mass and the forces exerted upon it. This can be represented as F = ma, read as ‘force = mass × acceleration’. The details need not concern us here: the point is that the force or forces acting on a given body will equal the mass multiplied by the acceleration. A mathematical function – multiplication – whose arguments are numbers can thus represent a relation in nature between mass, acceleration and force. And this relation in nature is itself a function: a body’s acceleration is a function of its mass and the forces exerted upon it. For simplicity, let’s call this ‘Newton’s function’.

But, when a particular mass has a particular force exerted upon it, and accelerates at a certain rate, it does not compute the value of Newton’s function. If it did, then every force–mass–acceleration relationship in nature would be a computation, and every physical object a computer. Rather, as I shall say, a particular interaction instantiates the function: that is, it is an instance of Newton’s function. Likewise, when the planets in the solar system orbit the sun, they do so in a way that is a function of gravitational and inertial ‘input’. Kepler’s laws are a way of describing this function. But the solar system is not a computer. The planets do not ‘compute’ their orbits from the input they receive: they just move.

So the crucial distinction we need is between a system’s instantiating a function and a system’s computing a function. By ‘instantiating’ I mean ‘being an instance of’ (if you prefer, you could substitute ‘being describable by’). Compare the solar system with a real computer, say a simple adding machine. (I mean an actual physical adding machine, not an abstract Turing ‘machine’.) It’s natural to say that an adding machine computes the addition function by taking two or more numbers as input (arguments) and giving you their sum as output (value). But, strictly speaking, this is not what an adding machine does. For, whatever numbers are, they aren’t the sort of thing that can be fed into machines, manipulated or transformed. (For example, you don’t destroy the number 3 by destroying all the 3s written in the world; that doesn’t make sense.) What the adding machine really does is take numerals – that is, representations of numbers – as input, and gives you numerals as output. This is the difference between the adding machine and the planets: although they instantiate a function, the planets do not employ representations of their gravitational and other input to form representations of their output.
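The contrast can also be put in code (my own illustration of the distinction, not the book’s): an adding machine manipulates numerals – strings of digits – while an accelerating body merely instantiates Newton’s function without representing anything.

# An adding machine processes representations: numeral strings go in,
# a numeral string comes out.
def adding_machine(numeral1, numeral2):
    return str(int(numeral1) + int(numeral2))

print(adding_machine('7', '5'))  # the numeral '12'

# By contrast, we can describe an accelerating body with a function,
# but the body itself computes nothing: it just moves.
def newtons_function(force, mass):
    return force / mass  # our description, not the body's computation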

Computing a function, then, requires representations: representations as the input and representations as the output. This is a perfectly natural way of understanding ‘computing a function’: when we compute with pen and paper, for example, or with an abacus, we use representations of numbers. As Jerry Fodor has said: ‘No computation without representation!’.9

How does this point relate to Turing machines and algorithms? A Turing machine table specifies transitions between the states of the machine. According to Church’s thesis, any procedure that is step-by-step algorithmic can be modelled on a Turing machine. So, any process in nature which can be represented in a step-by-step fashion can be represented by a Turing machine. The machine merely specifies the transitions between the states involved in the process. But this doesn’t mean that these natural processes are computations, any more than the fact that physical quantities such as my body temperature can be represented by numbers means that my body temperature actually is a number. If a theory of some natural phenomenon can be represented algorithmically, then the theory is said to be computable – but this is a fact about theories, not about the phenomena themselves. The idea that theories may or may not be computable will not concern us any further in this book.10

Without wishing to labour the point, let me emphasise that this is why we needed to distinguish at the beginning of this chapter between the idea that some systems can be modelled on a computer and the idea that some systems actually perform computations. A system can be modelled on a computer when a theory of that system is computable. A system performs computations, however, when it processes representations by using an effective procedure. ...

Automatic algorithms ...

If you have followed the discussion so far, then a very natural question will occur to you. Turing machines describe the abstract structure of computation. But, in the description of Turing machines, we have appealed to ideas like ‘moving the tape’, ‘reading the tape’, ‘writing a symbol’ and so on. We have taken these ideas for granted, but how are they supposed to work? How is it that any effective procedure gets off the ground at all, without the intervention of a human being at each stage in the procedure?

The answer is that the computers with which we are familiar use automated algorithms. They use algorithms, and input and output representations, that are in some way ‘embodied’ in the physical structure of the computer. The last part of our account of computers will be a very brief description of how this can be done. This brief discussion cannot, of course, deal with all the major features of how actual computers work, but I hope it will be enough to give you the general idea. ...

Consider a very simple machine (not a computer) that is used for trapping mice. We can think of this mousetrap in terms of input and output: the trap takes live mice as input, and gives dead (or perhaps just trapped) mice as output. A simple way of representing the mousetrap is shown in Figure 3.4.

Figure 3.4 Mousetrap ‘black box’.

From this simple representation of the mousetrap, it does not matter what is inside the MOUSETRAP ‘box’: whatever is in the box is whatever traps the mice. A box like this is what engineers call a ‘black box’: we can treat something as a black box when we are not really interested in its internal workings, but only in the input–output tasks it performs. But, of course, we can ‘break into’ the mousetrap’s black box and represent its innards, as in Figure 3.5.

MOUSETRAP: BAIT → TRAPPING DEVICE

Figure 3.5 The mousetrap’s innards.

The two components inside the black box are the bait and the device that actually traps the mice (the arrow indicates that the mouse moves from the bait into the trapping device, and not vice versa). Here we are, in effect, still treating BAIT and TRAPPING DEVICE as black boxes. We are interested only in what they do: the bait is whatever attracts the mouse, and the trapping device is whatever traps it.

But we can of course break into these black boxes too, and find out how they work. Suppose that our mousetrap is of the old-fashioned comic-book kind, with a metal bar held in place by a spring, which is released when the bait is taken. We can then

describe the trapping device in terms of its component parts. And its component parts too – SPRING, BAR, etc. – can be thought of as black boxes. It doesn’t matter exactly what they are; what matters is what they are doing in the mousetrap. But, these boxes too can be broken into, and we can specify in more detail how they work. What is treated as one black box at one level can be broken down into other black boxes at other levels, until we come to understand the workings of the mousetrap.

This kind of analysis of machines is sometimes known as ‘functional analysis’: the analysis of the working of the machine into the functions of its component parts. (It is also sometimes called ‘functional boxology’.) Notice, though, that the word ‘function’ is being used in a different sense than in our earlier discussion: here, the function of a part of a system is the causal role it plays in the system. This use of ‘function’ corresponds more closely to the everyday use of the term, as in ‘what’s the function of this bit?’.

Now back to computers. Remember our simple algorithm for multiplication. This involved a number of tasks, such as writing symbols on the X and Y pieces of paper, and adding and subtracting. Now think of a machine that carries out this algorithm, and let’s think of how to functionally analyse it. At the most general level, of course, it is a multiplier. It takes numerals as input and gives you their products as output. At this level, it may be thought of as a black box (see Figure 3.6).

Numerals in → MULTIPLIER → product numeral out

Figure 3.6 Multiplier black box.

But this doesn’t tell us much. When we ‘look’ inside the black box, what is going on is what is represented by the flow chart (Figure 3.7). Each box in the flow chart represents a step performed by the machine. But some of these steps can be broken down into simpler steps. For example, step (iv) involves adding the number written on Y to the ANSWER. But adding is also a step-by-step procedure, and so we can write a flow chart for this too. Likewise with the other steps: subtracting, ‘reading’ and so on. When we functionally analyse the multiplier, we find out that its tasks become simpler and simpler, until we get down to the simplest tasks it can perform.

Figure 3.7 Flow chart for the multiplication algorithm again.

Daniel Dennett has suggested a vivid way of thinking of the architecture of computers. Imagine each task in the flow chart’s boxes being performed by a little man, or ‘homunculus’. The biggest box (labelled Multiplier in Figure 3.6) contains a fairly intelligent homunculus, who, say, multiplies numbers expressed in denary notation. But inside this homunculus are other, less intelligent, homunculi who can do only addition and subtraction, and writing denary symbols on the paper. Inside these other homunculi are even more stupid homunculi who can translate denary notation into binary. And inside these are really stupid homunculi who can only read, write or erase binary numerals. Thus, the behaviour of the intelligent multiplier is functionally explained by postulating progressively more and more stupid homunculi.11 ...

If we have a way of making a real physical device that functions as a simple device – a stupid homunculus – we can build up combinations of these simple devices into complex devices that can perform the task of the multiplier. After all, the multiplier is nothing more than these simple devices arranged in the way specified by the flow chart. Now, remember that Turing’s great insight was to show that any algorithm could be broken down into tasks simple enough to be performed by a Turing machine. So let’s think of the simplest devices as the devices which can perform these simple Turing machine operations: move from left or right, read, write, etc. All we need to do now is make some devices that can perform these simple operations.

And, of course, we have many ways of making them. For vividness, think of the tape of some Turing machine represented by an array of switches: the switch being on represents 1 and the switch being off represents 0. Then any computation can be performed by a machine that can move along the switches one by one, register which position they are in (‘reading’) and turn them on or off (‘writing’). So long as we have some way of programming the machine (i.e. telling it which Turing machine it is mimicking), then we have built a computer out of switches.

Real computers are, in a sense, built out of ‘switches’, although not in the simple way just described. One of the earliest computers (built in 1944) used telephone relays, while the Americans’ famous war effort ENIAC (used for calculating missile trajectories) was built using valves; and valves and relays are, in effect, just switches. The real advances came when the simplest processors (the ‘switches’) could be built out of semi-conductors, and computations could be performed faster than Turing ever dreamed of. Other major advances came with high-level ‘programming languages’: systems of coding that can make the basic operations of the machine perform all sorts of other more complex operations. But, for the purposes of this book, the basic principle behind even these very complex machines can be understood in the way I have outlined. (For more information about the history of the computer, see the chronology at the end of this book.)


An important consequence of this is that it does not matter what a computer is made of. What matters to being a computer is what it does – that is, what computational tasks it performs, or what program it is running. The computers we use today perform these tasks by means of microscopic electronic circuits etched onto tiny silicon chips. But, however efficient this technology is, the tasks performed could in principle be carried out by arrays of switches, by beads, matchsticks and tin cans – perhaps even by the brain’s neurochemistry. This idea is known as the ‘variable realisation’ (or ‘multiple realisation’) of a program (or software) by a physical mechanism (hardware): one and the same program can be ‘realised’ variably, or multiply, by different pieces of hardware.

I should add one final point about some real computers. It is a simplification to say that all computers work entirely algorithmically. When people build computer programs to play chess, for example, the rules of chess tell the machine, entirely unambiguously, what counts as a legal move. At any point in the game only certain moves are allowed by the rules. But how does the machine know which move to make, out of all the possible moves? As a game of chess will come to an end in a finite – though possibly very large – number of moves, it is possible in principle for the machine to scan ahead, figuring out every consequence of every permitted move. However, this would take even the most powerful computer an enormous (to put it mildly) amount of time. (John Haugeland estimates that the computer would have to look ahead 10¹²⁰ moves – which is a larger number than the number of quantum states in the whole history of the universe.12) So, designers of chess-playing programs add to their machines certain rules of thumb (called heuristics) that suggest good courses of action, though, unlike algorithms, they do not guarantee a particular outcome. A heuristic for a chess-playing machine might be something like, ‘Try and castle as early in the game as possible’. Heuristics have been very influential in artificial intelligence research. It is time now to introduce the leading idea behind artificial intelligence: the idea of a thinking computer.

Thinking computers? ...

Equipped with a basic understanding of what computers are, the question we now need to ask is: why would anyone think that being a computer – processing representations systematically – can constitute thinking?

At the beginning of this chapter, I said that to answer the question, ‘Can a computer think?’, we need to know three things: what a computer is, what thinking is and what it is about thought and computers that supports the idea that computers might think. We now have something of an idea of what a computer is, and in Chapters 1 and 2 we discussed some aspects of the common-sense conception of thought. Can we bring these things together?

There are a number of obvious connections between what we have learned about the mind and what we have learned about computers. One is that the notion of representation seems to crop up in both areas. One of the essential features of certain states of mind is that they represent. And in this chapter we have seen that one of the essential features of computers is that they process representations. Also, your thoughts cause you to do what you do because of how they represent the world to be. And it is arguable that computers are caused to produce the output they do because of what they represent: my adding machine is caused to produce the output 5 in response to the inputs 2, +, 3 and =, partly because those input symbols represent what they do.

However, we should not get too carried away by these similarities. The fact that the notion of representation can be used to define both thought and computers does not imply anything about whether computers can think. Consider this analogy: the notion of representation can be used to define both thought and books. It is one of the essential features of books that they contain representations. But books can’t think! Analogously, it would be foolish to argue that computers can think simply because the notion of representation can be employed in defining thought and computers.


Another bad way of arguing is to take the idea of ‘information processing’ too loosely. In one sense, thinking certainly does involve the processing of information – we pick up information from our environment, do things to it and use it to act in the world. But it is a mistake to conclude from this, plus the fact that computers are called ‘information processors’, that what goes on in computers must be a kind of thinking. This reasoning relies on taking ‘information processing’ in a very loose way when applying it to human thought, whereas in the theory of computation ‘information processing’ has a precise definition. The question about thinking computers is (in part) the question of whether the information processing that computers do has anything to do with the ‘information processing’ involved in thinking. And this question cannot be answered simply by pointing out that the term ‘information processing’ can be applied both to computers and to thought: to argue in that way would be to commit what is known as the ‘fallacy of equivocation’.

Another bad way to argue, as we have already seen, is to say that computers can think because there must be a Turing machine table for thinking. To say that there is a Turing machine table for thinking is to say that the theory of thinking is computable. This may be true; or it may not. But, even if it were true, it obviously would not imply that thinkers are computers. Suppose astronomy were computable: this would not imply that the universe is a computer. Once again, it is crucial to emphasise the distinction between computing a function and instantiating a function.

On the other hand, we must not be too quick to dismiss the idea of thinking computers. One familiar debunking criticism is that people have always thought of the mind or brain along the lines of the latest technology; and the present infatuation with thinking computers is no exception. This is how John Searle puts the point: ...

Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard . . . Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told that some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.13

Looked at in this way, it seems bizarre that anyone should think that the human brain (or mind), which has been evolving for millions of years, should have its mysteries explained in terms of ideas that arose some sixty or seventy years ago in rarified speculation about the foundations of mathematics.

But, in itself, the point proves nothing. The fact that an idea evolved in a specific historical context – and which idea didn’t? – doesn’t tell us anything about the correctness of the idea. However, there’s also a more interesting specific response to Searle’s criticism. It may be true that people have always thought of the mind by analogy with the latest technology. But the case of computers is very different from the other cases that Searle mentions. Historically, the various stages in the invention of the computer have always gone hand in hand with attempts to systematise aspects of human knowledge and intellectual skills – so it is hardly surprising that the former came to be used to model (or even explain) the latter. This is not so with hydraulics, or with mills or telephone exchanges. It’s worth dwelling on a few examples.

Along with many of his contemporaries, the great philosopher and mathematician G.W. Leibniz (1646–1716) proposed the idea of a ‘universal character’ (characteristica universalis): a mathematically precise, unambiguous language into which ideas could be translated, and by means of which the solutions to intellectual disputes could be resolved by ‘calculation’. In a famous passage, Leibniz envisages the advantages that such a language would bring: ...

Once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to a far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight.14

Leibniz did not get as far as actually designing the universal character (though it is interesting that he did invent binary notation). But with the striking image of this concept-calculating device we see the combination of interests which have preoccupied many computer pioneers: on the one hand, there is a desire to strip human thought of all ambiguity and unclarity; while, on the other, there is the idea of a calculus or machine that could process these skeletal thoughts.

These two interests coincide in the issues surrounding another major figure in the computer’s history, the Irish logician and mathematician George Boole (1815–1864). In his book The Laws of Thought (1854), Boole formulated an algebra to express logical relations between statements (or propositions). Just as ordinary algebra represents mathematical relations between numbers, Boole proposed that we think of the elementary logical relations between statements or propositions – expressed by words such as ‘and’, ‘or’, etc. – as expressible in algebraic terms. Boole’s idea was to use a binary notation (1 and 0) to represent the arguments and values of the functions expressed by ‘and’, ‘or’, etc. For example, take the binary operations 1 × 0 = 0 and 1 + 0 = 1. Now, suppose that 1 and 0 represent true and false respectively. Then we can think of 1 × 0 = 0 as saying something like, ‘If you have a truth and a falsehood, then you get a falsehood’ and 1 + 0 = 1 as saying ‘If you have a truth or a falsehood, then you get a truth’. That is, we can think of × as representing the truth-function and, and think of + as representing the truth-function or. (Boole’s ideas will be familiar to students of elementary logic. A sentence ‘P and Q’ is true just in case P and Q are both true, and ‘P or Q’ is true just in case P is true or Q is true.) Boole claimed that, by building up patterns of reasoning out of these simple algebraic forms, we can discover the ‘fundamental laws of those operations of the mind by which reason is performed’.15 That is, he aimed to systematise or codify the principles of human thought. The interesting fact is that Boole’s algebra came to play a central role in the design of modern digital computers. The behaviour of the × function in Boole’s system can be coded by a simple device known as an ‘and-gate’ (see Figure 3.8). An and-gate is a mechanism taking electric currents from two sources (X and Y) as inputs, and giving one electric current as output (Z). The device is designed in such a way that it will output a current at Z when, and only when, it is receiving a current from both X and Y. In effect, this device represents the truth function ‘and’. Similar gates are constructed for the other Boolean operations: in general, these devices are called ‘logic gates’ and are central to the design of today’s digital computers.

X, Y → [and-gate] → X and Y

Figure 3.8 An ‘and-gate’.
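A sketch (mine) of Boole’s idea in code: with 1 for true and 0 for false, the and-gate just is the × function on its inputs, and the or-gate behaves like (capped) addition.

# Boole's algebra with 1 as true and 0 as false.
def and_gate(x, y):
    return x * y  # 1 × 0 = 0: a truth and a falsehood make a falsehood

def or_gate(x, y):
    return min(x + y, 1)  # 1 + 0 = 1: a truth or a falsehood make a truth

print(and_gate(1, 0))  # 0
print(or_gate(1, 0))   # 1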

Eventually, the ideas of Boole and Leibniz, and other great innovators, such as the English mathematician Charles Babbage (1792–1871), gave birth to the idea of the general-purpose programmable digital computer. The idea then became reality in the theoretical discoveries of Turing and Church, and in the technological advances in electronics of the post-war years (see the chronology at the end of the book for some more details). But, as the cases of Boole and Leibniz illustrate, the ideas behind the computer, however vague, were often tied up with the general project of understanding human thought by systematising or codifying it. It was only natural, then, when the general public became aware of computers, that they were hailed as ‘electronic brains’.16

These points do not, of course, justify the claim that computers can think. But they do help us see what is wrong with some hasty reactions to this claim. In a moment we will look at some of the detailed arguments for and against it. But first we need to take a brief look at the idea of artificial intelligence itself. ...

Artificial intelligence ...

What is artificial intelligence? It is sometimes hard to get a straight answer to this question, as the term is applied to a number of different intellectual projects. Some people call artificial intelligence (or AI) the ‘science of thinking machines’, while others, e.g. Margaret Boden, are more ambitious, calling it ‘the science of intelligence in general’.17 To the newcomer, the word ‘intelligence’ can be a bit misleading here, because it suggests that AI is interested only in tasks which we would ordinarily classify as requiring intelligence – e.g. reading difficult books or proving theorems in mathematics. In fact, a lot of AI research concentrates on matters which we wouldn’t ordinarily think of as requiring intelligence, such as seeing three-dimensional objects or understanding simple text.

Some of the projects that go under the name of AI have little to do with thought or thinking computers. For example, there are the so-called ‘expert systems’, which are designed to give advice on specialised areas of knowledge – e.g. drug diagnosis. Sophisticated as they are, expert systems are not (and are not intended to be) thinking computers. From the philosophical point of view, they are simply souped-up encyclopaedias.


The philosophically interesting idea behind AI is the idea of building a thinking computer (or any other thinking machine, for that matter). Whether this can be done is obviously an interesting question in itself; but, if Boden and others are right, the project of building a thinking computer should also help us to understand what intelligence (or thought) in general is. That is, by building a thinking computer, we could learn about thought.

It may not be obvious how this is supposed to work. How can building a thinking computer tell us about how we think? Consider an analogy: building a flying machine. Birds fly, and so do aeroplanes; but building aeroplanes does not tell us very much about how birds manage to fly. Just as aeroplanes fly in a different way from the way birds do, so a thinking computer might think in a different way from the way we do. So how can building a thinking computer in itself tell us much about human thought?

On the other hand, this argument might strike you as odd. After all, thinking is what we do – the essence of thinking is human thinking. So how could anything think without thinking in the way we do? This is a good question. What it suggests is that, instead of starting off by building a thinking computer and then asking what this tells us about thought, we should first figure out what thinking is, and then see if we can build a machine which does this. However, once we had figured out what thinking is, building the machine wouldn’t then tell us anything we didn’t already know!

If the only kind of thinking were human thinking (whatever this means exactly) then it would only be possible to build a thinking computer if human thinking actually were computational. To establish this, we would obviously have to investigate in detail what thinking and other mental processes are. So this approach will need a psychological theory behind it: for it will need to figure out what the processes are before finding out what sort of computational mechanisms carry out these processes. The approach will then involve a collaboration between psychology and AI, to provide the full theory of human mental processing. I’ll follow recent terminology in calling this collaboration ‘cognitive science’ – this will be the topic of Chapter 4.18

On the other hand, if something could think, but not in the way we do, then AI should not be constrained by finding out about how human psychology works. Rather, it should just go ahead and make a machine that performs a task with thought or intelligence, regardless of the way we do it. This was, in fact, the way that the earliest AI research proceeded after its inception in the 1950s. The aim was to produce a machine that would do things that would require thought if done by people. They thought that doing this would not require detailed knowledge of human psychology or physiology.19

One natural reaction to this is that this approach can only ever produce a simulation of thought, not the real thing. For some, this is not a problem: if the machine could do the job in an intelligent-seeming way, then why should we worry about whether it is the ‘real thing’ or not? However, this response is not very helpful if AI really is supposed to be the ‘science of intelligence in general’, as, by blurring the distinction between real thought and simulation, it won’t be able to tell us very much about how our (presumably real) thought works. So how could anyone think that it was acceptable to blur the distinction between real thought and its simulation?

The answer, I believe, lies in the early history of AI. In 1950, Turing published an influential paper called ‘Computing Machinery and Intelligence’, which provided something of the philosophical basis of AI. In this paper, Turing addressed the question, ‘Can a machine think?’. Finding this question too vague, he proposed replacing it with the question: ‘Under what circumstances would a machine be mistaken for a real thinking person?’. Turing devised a test in which a person is communicating at a distance with a machine and another person. Very roughly, this ‘Turing test’ amounts to this: if the first person cannot tell the difference between the conversation with the other person and the conversation with the machine, then we can say that the machine is thinking.
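The bare logic of the test can be put schematically. The following toy Python sketch (my illustration, not Turing’s own formulation; the ‘judge’ here is just a stand-in that guesses at random) treats passing the test as the judge’s verdicts being no better than chance:

import random

def judge_guess(transcript):
    # hypothetical judge: in a real test this would be a person's verdict
    # after conversing through both channels
    return random.choice(["person", "machine"])

trials = [("machine", "transcript A"), ("person", "transcript B")] * 50
correct = sum(judge_guess(t) == who for who, t in trials)
print("judge accuracy:", correct / len(trials))  # near 0.5: cannot tell the difference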

There are many ramifications of this test, and spelling out in detail what it involves is rather complicated.20 My own view is that the assumptions behind the test are behaviouristic (see Chapter 2, ‘Understanding other minds’, p. 47) and that the test is therefore inadequate. But the only point I want to make here is that accepting the Turing test as a decisive test of intelligence makes it possible to separate the idea of something thinking from the idea of something thinking in the way humans do. If the Turing test is an adequate test of thought, then all that is relevant is how the machine performs in the test. It is not relevant whether the machine passes the test in the way that humans do. Turing’s redefinition of the question ‘Can a machine think?’ enabled AI to blur the distinction between real thought and its mere simulation.

This puts us in a position to distinguish between the two questions I raised at the beginning of this chapter:


1 Can a computer think? That is, can something think simply by being a computer?

2 Is the human mind a computer? That is, do we think (in whole or in part) by computing?

These questions are distinct, because someone taking the latter kind of AI approach could answer ‘Yes’ to 1 while remaining agnostic on 2 (‘I don’t know how we manage to think, but here’s a computer that can!’). Likewise, someone could answer ‘Yes’ to question 2 while denying that a mere computer could think. (‘Nothing could think simply by computing; but computing is part of the story about how we think.’)

Chapter 4 will deal with question 2, while the rest of this chapter will deal with some of the most interesting philosophical reasons for saying ‘No’ to question 1. For the sake of clarity, I will use the terms ‘AI’ and ‘artificial intelligence’ for the view that computers can think – but it should be borne in mind that these terms are also used in other ways.

How has philosophy responded to the claims of AI, so defined? Two philosophical objections stand out:

1 Computers cannot think because thinking requires abilities that computers by their very nature can never have. Computers have to obey rules (whether algorithms or heuristics), but thinking can never be captured in a system of rules, no matter how complex. Thinking requires rather an active engagement with life, participation in a culture and ‘know-how’ of the sort that can never be formalised by rules. This is the approach taken by Hubert Dreyfus in his blistering critique of AI, What Computers Can’t Do.

2 Computers cannot think because they only manipulate symbols according to their formal features; they are not sensitive to the meanings of those symbols. This is the theme of a well-known argument by John Searle: the ‘Chinese room’.

In the final two sections of this chapter, I shall assess these objections.21

Can thinking be captured by rules and representations?

The Arizona Daily Star for 31 May 1986 reported this unfortunate story:

A rookie bus driver, suspended for failing to do the right thing when a girl suffered a heart attack on his bus, was following overly strict rules that prohibit drivers from leaving their routes without permission, a union official said yesterday. ‘If the blame has to be put anywhere, put it on the rules that those people have to follow’ [said the official]. [A spokesman for the bus company defended the rules]: ‘You give them a little leeway, and where does it end up?’22

The hapless driver’s behaviour can be used to illustrate a perennial problem for AI. By sticking to the strict rule ‘only leave your route if you have permission’, the driver was unable to deal with the emergency in an intelligent, thinking way. But computers must, by their very nature, stick to (at least some) strict rules and, therefore, will never be able to behave with the kind of flexible, spontaneous responses that real thinkers have. The objection concludes that thinking cannot be a matter of using strict rules; so computers cannot think.


This objection is a little quick. Why is the problem with the particular rule chosen, rather than with the very idea of following rules at all? The problem with the rule in the example – ‘Only leave your route if you have permission’ – is that it is too simple, not that it is a rule. The bus company should have given the driver a rule more like this: ‘Only leave your route if you have permission, unless there is a medical emergency on board, in which case you should drive to the nearest hospital’. This rule deals with the heart attack case – but what if the driver knows that the nearest hospital is under siege by terrorists? Or what if he knows that there is a doctor on board? Should he stick to the hospital rule? Probably not – but, if he shouldn’t, is there some other rule he should follow? And what rule is that?


It is absurd to suppose that the bus company should present the driver with a rule like this: ‘Only leave your route if you have permission, unless there is a medical emergency on board, in which case you should drive to the nearest hospital, unless the hospital is under siege by international terrorists, or unless there is a doctor on board, or . . . in which case you should . . .’ – we do not even know how to fill in the dots. How could we frame a rule that is specific enough to give the person following it precise directions about what to do (e.g. ‘drive to the nearest hospital’ rather than ‘do something sensible’), yet general enough to apply to all eventualities (e.g. not just to heart attacks, but to emergencies in general)?


In his essay ‘Politics and the English Language’, George Orwell gave some rules for good writing (e.g. ‘Never use a long word where a short one will do’), ending with the rule: ‘Break any of these rules sooner than say anything outright barbarous’.23 We could add an analogous rule to the bunch of rules given to the bus driver: ‘Break any of these rules sooner than do anything stupid’. Or, more politely: ‘Use your common sense!’

With human beings, we can generally rely on them to use their common sense, and it’s hard to know how we could understand problems like the bus driver’s without appealing (at some stage) to something like common sense, or ‘what it’s reasonable to do’. If a computer is to cope with a simple problem like this, it will have to use common sense too. But computers work by manipulating representations according to rules (algorithms or heuristics). So, for a computer to deal with the problem, common sense will have to be stored in the computer in terms of rules and representations. What AI needs, then, is a way of programming computers with explicit representations of common-sense knowledge.
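To make the shape of this project concrete, here is a minimal Python sketch (my illustration, with invented rule clauses, drawn from no real system) of the driver’s rule book as a program. Every exception the rules handle invites another that they do not:

def decide(situation):
    # work through the hand-written rules in order
    if situation.get("medical_emergency"):
        if situation.get("hospital_under_siege"):
            return "???"  # the rule book runs out here
        if situation.get("doctor_on_board"):
            return "???"  # and here: stick to the hospital rule or not?
        return "drive to the nearest hospital"
    if situation.get("has_permission"):
        return "leave the route"
    return "stay on the route"

print(decide({"medical_emergency": True}))   # covered: 'drive to the nearest hospital'
print(decide({"medical_emergency": True,
              "doctor_on_board": True}))     # not covered: '???'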

This is what Dreyfus says can’t be done. He argues that human intelligence requires ‘the background of common-sense that adult human beings have by virtue of having bodies, interacting skilfully with the material world, and being trained in a culture’.24 And, according to Dreyfus, this common-sense knowledge cannot be represented as ‘a vast base of propositional knowledge’, i.e. as a bunch of rules and representations of facts.25

The chief reason why common-sense knowledge can’t be represented as a bunch of rules and representations is that common-sense knowledge is, or depends on, a kind of know-how. Philosophers distinguish between knowing that something is the case and knowing how to do something. The first kind of knowledge is a matter of knowing facts (the sorts of things that can be written in books: e.g. knowing that Sofia is the capital of Bulgaria), while the second is a matter of having skills or abilities (e.g. being able to ride a bicycle).26 Many philosophers believe that an ability such as knowing how to ride a bicycle is not something that can be entirely reduced to knowledge of certain rules or principles. What you need to have when you know how to ride a bicycle is not ‘book-learning’: you don’t employ a rule such as ‘when turning a corner to the right, lean slightly to the right with the bicycle’. You just get the hang of it, by trial and error.

And, according to Dreyfus, getting the hang of it is what you do when you have general intelligence too. Knowing what a chair is is not just a matter of knowing the definition of the word ‘chair’. It also essentially involves knowing what to do with chairs, how to sit on them, get up from them, being able to tell which objects in the room are chairs, or what sorts of things can be used as chairs if there are no chairs around – that is, the knowledge presupposes a ‘repertoire of bodily skills which may well be indefinitely large, because there seems to be an indefinitely large variety of chairs and of successful (graceful, comfortable, secure, poised, etc.) ways to sit in them’.27 The sort of knowledge that underlies our everyday way of living in the world either is – or rests on – practical know-how of this kind.

A computer is a device that processes representations according to rules. And representations and rules are obviously not skills. A book contains representations, and it can contain representations of rules too – but a book has no skills. If the computer has knowledge, it must be ‘knowledge that so-and-so is the case’ rather than ‘knowledge of how to do so-and-so’. So, if Dreyfus is right, and general intelligence requires common sense, and common sense is a kind of know-how, then computers cannot have common sense, and AI cannot succeed in creating a computer which has general intelligence. The two obvious ways for the defenders of AI to respond are either to reject the idea that general intelligence requires common sense or to reject the idea that common sense is know-how.

The first option is unpromising – how could there be general intelligence which did not employ common sense? – and is not popular among AI researchers.28 The second option is a more usual response. Defenders of this option can say that it requires hard work to make explicit the assumptions implicit in the common-sense view of the world; but this doesn’t mean that it can’t be done. In fact, it has been tried. In 1984, the Microelectronics and Computer Technology Corporation of Texas set up the CYC project, whose aim was to build up a knowledge base of a large amount of common-sense knowledge. (The name ‘CYC’ derives from ‘encyclopaedia’.) Those working on CYC attempt to enter common-sense assumptions about reality, assumptions so fundamental and obvious that they are normally overlooked (e.g. that solid objects are not generally penetrable by other solid objects, etc.). The aim is to express a large percentage of common-sense knowledge in terms of about 100 million propositions, coded into a computer. In the first six years of the project, one million propositions were in place. The director of the CYC project, Doug Lenat, once claimed that, by 1994, they would have stored between thirty and fifty per cent of common-sense knowledge (or, as they call it, ‘consensus reality’).29


The ambitions behind programmes such as CYC have been fiercely criticised by Dreyfus and others. However, even if all common-sense knowledge could be stored as a bunch of rules and representations, this would only be the beginning of AI’s problems. For it is not enough for the computer merely to store the information; it must be able to retrieve it and use it in an intelligent way. It is not enough to own an encyclopaedia – one must know how to look things up in it.

Crucial here is the idea of relevance. If the computer cannot know which facts are relevant to which other facts, it will not perform well in using the common sense it has stored to solve problems. But whether one thing is relevant to another varies as conceptions of the world vary. The sex of a person is no longer thought to be relevant to whether they have a right to vote; but two hundred years ago it was.

Relevance goes hand in hand with a sense of what is out of place or what is exceptional or unusual. Here is what Dreyfus says about a program intended for understanding stories about restaurants:

[T]he program has not understood a restaurant story the way people in our culture do, until it can answer such simple questions as: When the waiter came to the table did he wear clothes? Did he walk forward or backward? Did the customer eat his food with his mouth or his ear? If the program answers ‘I don’t know’, we feel that all its right answers were tricks or lucky guesses and that it has not understood anything of our everyday restaurant behaviour.30

Dreyfus argues that it is only because we have a way of living in the world that is based on skills and interaction with things (rather than on the representation of propositional knowledge or ‘knowledge that so-and-so’) that we are able to know what sorts of things are out of place, and what is relevant to what.

There is much more to Dreyfus’s critique of AI than this brief summary suggests, but I hope this gives an idea of the general line of attack. The problems raised by Dreyfus are sometimes grouped under the heading of the ‘frame problem’,31 and they raise some of the most difficult issues for the traditional approach to AI, the kind of AI described in this chapter. There are a number of ways of responding to Dreyfus. One response is that of the CYC project: to try to meet Dreyfus’s challenge by itemising ‘consensus reality’. Another response is to concede that ‘classical’ AI, based on rules and representations, has failed to capture the abilities fundamental to thought, and that AI needs a radically different approach. In Chapter 4, I shall outline an example of this approach, known as ‘connectionism’. Another response, of course, is to throw up one’s hands in despair, and give up the whole project of making a thinking machine. At the very least, Dreyfus’s arguments present a challenge to the research programme of AI: the challenge is to represent common-sense knowledge in terms of rules and representations. And, at most, the arguments signal the ultimate breakdown of the idea that the essence of thought is manipulating symbols according to rules. Whichever view one takes, I think that the case made by Dreyfus licenses a certain amount of scepticism about the idea of building a thinking computer.

The Chinese room

Dreyfus argues that conventional AI programs don’t stand a chance of producing anything that will succeed in passing for general intelligence – e.g. plausibly passing the Turing test. John Searle takes a different approach. He allows, for the sake of argument, that an AI program could pass the Turing test. But he then argues that, even if it did, it would only be a simulation of thinking, not the real thing.32

To establish his conclusion, Searle uses a thought experiment which he calls the ‘Chinese room’. He imagines himself to be inside a room with two windows – let’s label them I and O respectively. Through the I window come pieces of paper with complex markings on them. In the room is a huge book written in English, in which are written instructions of the form: ‘Whenever you get a piece of paper through the I window with these kinds of markings on it, do certain things to it, and pass a piece of paper with those kinds of markings on it through the O window’. There is also a pile of pieces of paper with markings inside the room.

Now suppose the markings are in fact Chinese characters: those coming through the I window are questions, and those going through the O window are sensible answers to the questions. The situation now resembles the set-up inside a computer: a bunch of rules (the program) operates on symbols, giving out certain symbols through the output window in response to other symbols through the input window.

Searle accepts for the sake of argument that, with a suitable program, the set-up could pass the Turing test. From outside the room, Chinese speakers might think that they were having a conversation with the person in the room. But, in fact, the person in the room (Searle) does not understand Chinese. Searle is just manipulating the symbols according to their form (roughly, their shape) – he has no idea what the symbols mean. The Chinese room is therefore supposed to show that running a computer program can never constitute genuine understanding or thought, as all computers can do is manipulate symbols according to their form.
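The purely formal character of the set-up can be made vivid with a toy Python program (my sketch; a real rule book would be vastly larger, but no less formal). The mapping pairs input shapes with output shapes, and nothing in it encodes what any symbol means:

RULE_BOOK = {
    "你好吗?": "我很好,谢谢。",          # question shape in, answer shape out
    "今天天气好吗?": "今天天气很好。",
}

def room(symbols):
    # symbols arrive through the I window; the book is applied;
    # the result goes out through the O window
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(room("你好吗?"))  # a sensible-looking answer, produced with no understanding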

The general structure of Searle’s argument is as follows:

1 Computer programs are purely formal or ‘syntactic’: roughly, they are sensitive only to the ‘shapes’ of the symbols they process.

2 Genuine understanding (and, by extension, all thought) is sensitive to the meaning (or ‘semantics’) of symbols.

3 Form (or syntax) can never constitute, or be sufficient for, meaning (or semantics).

4 Therefore, running a computer program can never be sufficient for understanding or thought.

The core of Searle’s argument is premise 3. Premises 1 and 2 are supposed to be uncontroversial, and the defence of premise 3 is provided by the Chinese room thought experiment. (The terms ‘syntax’ and ‘semantics’ will be explained in more detail in Chapter 4. For the time being, think of them simply as ‘form’ and ‘meaning’ respectively.)


An obvious response to Searle’s argument is that the analogy does not work. Searle argues that the computer cannot understand Chinese because he, in the Chinese room, does not understand Chinese. But his critics respond that this is not what AI should say: Searle-in-the-room is analogous only to a part of the computer, not to the computer itself. The computer itself is analogous to Searle plus the room plus the rules plus the rest of the data. So, the critics say, Searle has represented AI as claiming that the computer understands because a part of it understands: but no-one working in AI would say that. They would say instead that the whole room (i.e. the whole computer) understands Chinese.

Searle cannot resist poking fun at the idea that a room can understand; but, of course, this is philosophically irrelevant. His serious response to this criticism is this: suppose I memorise the whole of the rules and the data. I can then do all the things I did inside the room, except that, because I have memorised the rules and the data, I can do it outside the room. But I still don’t understand Chinese. So the appeal to the room’s understanding does not answer the point.

Some critics object to this by saying that memorising the rules and data is not a trivial task – who is to say that, once you had done this, you wouldn’t understand? They argue that it is a failure of imagination on Searle’s part that makes him rule out this possibility. (I will return to this below.)

Another way of objecting to Searle here is to say that, if Searle had not just memorised the rules and the data, but had also started acting in the world of Chinese people, then it is plausible that he would, before too long, come to realise what these symbols mean. Suppose that the data concerned a restaurant conversation (in the style of some real AI programs), and Searle was actually a waiter in a Chinese restaurant. He would come to see, for example, that a certain symbol was always associated with requests for fried rice, another one with requests for shark-fin dumplings, and so on. And this would be the beginning (in some way) of coming to see what they mean.

Searle’s objection to this is that the defender of AI has now conceded his point: it is not enough for understanding that a program is running; you need interaction with the world for genuine understanding. But the original idea of AI, he claims, was that running a program was enough on its own for understanding. So this response effectively concedes that the main idea behind AI is mistaken.

Strictly speaking, Searle is right here. If you say that, in order to think, you need to interact with the world, then you have abandoned the idea that a computer can think simply because it is a computer. But notice that this does not mean that computation is not involved in thinking at some level. Someone who has performed the (perhaps practically impossible) task of memorising the rules and the data is still manipulating symbols in a rule-governed or algorithmic way. It’s just that he or she needs to interact with the world to give these symbols meaning. (‘Interact with the world’ is, of course, very vague. Something more will be said about it in Chapter 5.) So Searle’s argument does not touch the general idea of cognitive science: the idea that thinking might be performing computations, even though that is not all there is to it. Searle is quite aware of this, and has also provided a separate argument against cognitive science, aspects of which I shall look at in Chapter 4.


What should we conclude from Searle’s argument? One point I think is absolutely right is premise 3 of the argument above: syntax is not sufficient for semantics. That is, symbols do not ‘interpret themselves’. This is really just a bald statement of the problem of representation itself: if it were false, there would in a sense be no problem of representation. Does this mean that there can be no explanation of how symbols represent what they do? Not necessarily – Chapter 5 will examine some such explanations. But we must always take care, when we give such an explanation, that we are not surreptitiously introducing the very thing we are trying to explain (understanding, meaning, semantics and so on). This, I think, is one main lesson of Searle’s argument against AI.


However, some philosophers have questioned whether Searle is entitled to this premise. The eliminative materialists Paul Churchland and Patricia Churchland illustrate the point with a physical analogy. Suppose someone accepted that (i) electricity and magnetism are forces, and that (ii) the essential property of light is luminance. They might then argue that (iii) forces cannot be sufficient for, or cannot constitute, luminance. And they could support this with the following thought experiment (the ‘luminous room’): imagine someone waving a magnet around in a dark room. This will generate electromagnetic waves but, however fast she moves the magnet, the room will remain dark. The conclusion drawn is that light cannot be electromagnetic radiation.


But light just is electromagnetic radiation, so where does the argument go wrong? The mistake, the Churchlands say, lies in the third premise: that forces cannot be sufficient for, or cannot constitute, luminance. This premise is false, and the luminous room thought experiment cannot establish its truth. Likewise, they claim, the mistake in Searle’s argument lies in its third premise – that syntax is not sufficient for semantics – and the appeal to the Chinese room cannot establish its truth. For the Churchlands, whether syntax is sufficient for semantics is an empirical, scientific question, and not one that can be settled on the basis of an imaginative thought experiment like the Chinese room:

Goethe found it inconceivable that small particles by themselves could constitute or be sufficient for the objective phenomenon of light. Even in this century, there have been people who found it beyond imagining that inanimate matter by itself, and however organised, could ever constitute or be sufficient for life. Plainly, what people can or cannot imagine often has nothing to do with what is or is not the case, even where the people involved are highly intelligent.33

This is a version of the objection that Searle is hamstrung by the limits of what he can imagine. In response, Searle has denied that it is, or could be, an empirical question whether syntax is sufficient for semantics – so the luminous room is not a good analogy. To understand this response, we need to know a little bit more about the notions of syntax and semantics, and how they might apply to the mind. This will be one of the aims of Chapter 4.

Conclusion: can a computer think?

So what should we make of AI and the idea of thinking computers? In 1965, one of the pioneers of AI, Herbert Simon, predicted that ‘machines will be capable, within twenty years, of doing any work that a man can do’.34 Almost forty years later, there still seems no chance that this prediction will be fulfilled. Is this a problem-in-principle for AI, or is it just a matter of more time and more money?

Dreyfus and Searle think that it is a problem-in-principle. The upshot of Dreyfus’s argument was, at the very least, this: if a computer is going to have general intelligence – i.e. be capable of reasoning about any kind of subject matter – then it has to have common-sense knowledge. The issue now for AI is whether common-sense knowledge could be represented in terms of rules and representations. So far, all attempts to do this have failed.35

The lesson of Searle’s argument, it seems to me, is rather different. Searle’s argument itself begs the question against AI by (in effect) just denying its central thesis – that thinking is formal symbol manipulation. But Searle’s assumption, nonetheless, seems to me to be correct. I argued that the proper response to Searle’s argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But, if you let the outside world have some impact on the room, meaning or ‘semantics’ might begin to get a foothold. But, of course, this response concedes that thinking cannot be simply symbol manipulation. Nothing can think simply by being a computer.

However, this does not mean that the idea of computation cannot apply in any way to the mind. For it could be true that nothing can think simply by being a computer, and also true that the way we think is partly by computing. This idea will be discussed in the next chapter.


Further reading


A very good (though technical) introduction to artificial intelligence is S.J. Russell and P. Norvig’s Artificial Intelligence: a Modern Approach (Englewood Cliffs, NJ: Prentice Hall 1995). The two best philosophical books on the topic of this chapter are John Haugeland’s Artificial Intelligence: the Very Idea (Cambridge, Mass.: MIT Press 1985) and Jack Copeland’s Artificial Intelligence: a Philosophical Introduction (Oxford: Blackwell 1993). There are a number of good general books which introduce the central concepts of computing in a clear non-technical way. One of the best is Joseph Weizenbaum’s Computer Power and Human Reason (Harmondsworth: Penguin 1984), Chapters 2 and 3. Chapter 2 of Roger Penrose’s The Emperor’s New Mind (Oxford: Oxford University Press 1989) gives a very clear exposition of the ideas of an algorithm and a Turing machine, with useful examples. A straightforward introduction to the logical and mathematical basis of computation is given by Clark Glymour, in Thinking Things Through (Cambridge, Mass.: MIT Press 1992), Chapters 12 and 13. Hubert Dreyfus’s book has been reprinted, with a new introduction, as What Computers Still Can’t Do (Cambridge, Mass.: MIT Press 1992). Searle’s famous critique of AI can be found in his book Minds, Brains and Science (Harmondsworth: Penguin 1984), and also in an article which preceded the book, ‘Minds, brains and programs’, which is reprinted in Margaret Boden’s useful anthology The Philosophy of Artificial Intelligence (Oxford: Oxford University Press 1990). This also contains Turing’s famous paper ‘Computing machinery and intelligence’ and an important paper by Dennett on the frame problem. Searle’s article, along with some interesting articles by some of the founders of AI, is also reprinted in John Haugeland’s anthology Mind Design (Cambridge, Mass.: MIT Press 1981; 2nd edn, substantially revised, 1997), which includes a fine introduction by Haugeland.

4

The mechanisms of thought

The central idea of the mechanical view of the mind is that the mind is a part of nature, something which has a regular, law-governed causal structure. It is another thing to say that the causal structure of the mind is also a computational structure – that thinking is computing. However, many believers in the mechanical mind believe in the computational mind too. In fact, the association between thinking and computation is as old as the mechanical world picture itself:

When a man reasoneth, hee does nothing else but conceive a summe totall, from Addition of parcels; or conceive a remainder, from Substraction of one summe from another: which (if it be done by Words) is conceiving of the consequence of the names of all the parts, to the name of the whole; or from the names of the whole and one part, to the name of the other part . . . Out of which we may define (that is to say determine,) what that is, which is meant by this word Reason, when we reckon it amongst the Faculties of the mind. For REASON, in this sense, is nothing but Reckoning (that is, Adding and Substracting) of the Consequences of generall names agreed upon, for the marking and signifying of our thoughts; I say marking them, when we reckon by ourselves; and signifying, when we demonstrate, or approve our reckonings to other men.1

This is an excerpt from Thomas Hobbes’s Leviathan (1651). Hobbes’s idea that reasoning is ‘reckoning’ (i.e. calculation) has struck some writers as a prefiguration of the computational view of thought.2 The aim of this chapter is to consider this computational view.

As I emphasised in Chapter 3, the computational view of thought is distinct from the claim that something can think simply by being a computer of a certain sort. Even if we denied that anything could think just by computing, we could hold that our thoughts have a computational basis. That is, we could think that some of our mental states and processes are, in some way, computational, without thinking that the idea of computation exhausts the nature of thought.

The idea that some mental states and processes are computational is one that is dominant in current philosophy of mind and in cognitive psychology, and, for this reason at least, it is an idea worth exploring in detail. But, before discussing these theories, we need to know which mental phenomena could plausibly be considered computational. Only then shall we know of which phenomena these theories could be true.

Cognition, computation and functionalism

I have spoken about the idea that the mind is a computer; but we now need to be a bit more precise. In our discussion of mental phenomena in Chapter 1 (‘Brentano’s thesis’, see p. 36) we uncovered a dispute about whether all mental states are representational (or exhibit intentionality). Some philosophers think that some mental states – such as bodily sensations, for example – have non-representational properties, known as ‘qualia’. From this viewpoint, then, not all mental states are representational. If this view is right, it will not be possible for the whole mind to be a computer, because computation is defined in terms of representation – remember that a computer is a device which processes representations in a systematic way. So only those mental states which are purely representational could be candidates for being computational states. The alternative view (known as ‘representationalism’ or ‘intentionalism’) says that all mental states, in all their aspects, are representational in nature. On this view, there is no obstacle in principle to all mental states being computational in nature.

I will not adjudicate this dispute here, but will return to it briefly in Chapter 6.3 My strategy in this chapter will be to make the best case for the computational theory of the mind, i.e. to consider the strongest examples of mental states and processes that have the most plausible claim to be computational in nature, and the arguments that there are such computational states and processes. We can then see how far these arguments apply to all other mental states. In one way, this is just good philosophical method: one should always assess a theory in its most plausible version. No-one is interested in a critique of a caricature. But, in this case, the argument for the computational nature of representational mental states is of independent interest, whatever one thinks of the view that says that all mental states are computational. So, for the time being, we will ignore the question of whether there can be a computational theory of pain.4

A brief digression is now needed on a matter of philosophical history. Those readers who are familiar with the functionalist philosophy of mind of the 1960s may find this confusing. For wasn’t the aim of this theory to show that mental states could be classified by their Turing machine tables, and wasn’t pain the paradigm example used (input = tissue damage; output = moaning/complaining behaviour)? These philosophers may have been wrong about the mind being a Turing machine, but surely they cannot have been as confused as I am saying that they were? However, I’m not saying they were confused. As I see it, the idea that mental states have machine tables was a reaction against the materialist theory that tied mental states too closely to particular kinds of brain states (‘Pain = C-fibre firing’ etc.). So a Turing machine table was one way of giving a relatively abstract specification of mental state types that did not pin them down to particular neural structures. Many kinds of different physical entity could be in the same mental state – the point of the machine table analogy was to show how this could be.5 But, as we saw in Chapter 3 – ‘Instantiating a function and computing a function’ (p. 102) – we need to distinguish between the idea that a transition between states can be described by a Turing machine table and the idea that a transition between states actually involves computation. To distinguish between these ideas, we needed to appeal to the idea of representation: computers process representations, while (for example) the solar system does not. It follows that we must distinguish between the functionalist theory of mind – which says that the mind is defined by its causal structure – and the computational theory of mind – which says that this causal structure is computational, i.e. a disciplined series of transitions among representations. This distinction is easy to see, of course, because not all causal structures are computations.

Let’s return to the question of the scope of the computational theory of mind. I said that it is controversial whether pains are purely representational, and therefore equally controversial whether there can be a purely computational theory of pains. So which mental states and processes could be more plausible examples of computational states and processes? The answer is now obvious: those states which are essentially purely representational in nature. In Chapter 1, I claimed that beliefs and desires (the propositional attitudes) are like that. Their essence is to represent the world, and, although they often appear in consciousness, it is not essential to them that they are conscious. There is no reason to think, at least from the perspective of common-sense psychology, that they have any properties other than their representational ones. A belief’s nature is exhausted by how it represents the world to be, and the properties it has as a consequence of that. So beliefs look like the best candidates, if there are any, to be computational states of mind.

The main claim of what is sometimes called the computational theory of cognition is that these representational states are related to one another in a computational way. That is, they are related to each other in something like the way that the representational states of a computer are: they are processed by means of algorithmic (and perhaps heuristic) rules. The term ‘cognition’ indicates that the concern of the theory is with cognitive processes, such as reasoning and inference, processes that link cognitive states such as belief. The computational theory of cognition is, therefore, the philosophical basis of cognitive science (see Chapter 3, ‘Thinking computers’, p. 109, for the idea of cognitive science).

Another term for this theory is the representational theory of mind. This term is less appropriate than ‘the computational theory of cognition’, for at least two reasons. The first is that it purports to describe the whole mind, which, as we have seen, is problematic. The second is that the idea that states of mind represent the world is, in itself, a very innocuous idea: almost all theories of the mind can accept that the mind ‘represents’ the world in some sense. What not all theories will accept is that the mind contains representations. Jean-Paul Sartre, for instance, said that ‘representations . . . are idols invented by the psychologists’.6 A theory of the mind could accept the simple truism that the mind ‘represents the world’ without holding that the mind ‘contains representations’.

What does it mean to say that the mind ‘contains’ representations? In outline it means this: in thinkers’ minds there are distinct states which stand for things in the world. For example, I am presently thinking about my imminent trip to Budapest. According to the computational theory of the mind, there is in me – in my head – a state which represents my visit to Budapest. (Similarly: there is, on the hard disk of my computer, a file – a complex state of the computer – which represents this chapter.)

This might remind you of the controversial theory of ideas as ‘pictures in the head’ which we dismissed in Chapter 1. But the computational theory is not committed to pictures in the head: there are many kinds of representation other than pictures. This raises the question: what does the computational theory of cognition say that these mental representations are?

There are a number of answers to this question; the rest of the chapter will sketch the most influential answers. I shall begin with the view that has provoked the most debate for the last twenty years: the idea that mental representations are, quite literally, words and sentences in a language: the ‘language of thought’.

The language of thought

We often express our thoughts in words, and we often think in words, silently, to ourselves. Though it is implausible to say that all thought is impossible without language, it is undeniable that the languages we speak give us the ability to formulate extremely complex thoughts. (It is hard to imagine how someone could think about, say, postmodernism without being able to speak a language.) But this is not what people mean when they say that we think in a language of thought.

What they mean is that when you have a thought – say a belief that the price of property is rising again – there is (literally) written in your head a sentence which means the same as the English sentence ‘The price of property is rising again’. This sentence in your head is not itself (normally) considered to be an English sentence, or a sentence of any public language. It is rather a sentence of a postulated mental language: the language of thought, sometimes abbreviated to LOT, and sometimes called Mentalese. The idea is that it is a plausible scientific or empirical hypothesis to suppose that there is such a mental language, and that cognitive science should work on this assumption and attempt to discover Mentalese.

Those encountering this theory for the first time may well find it very bizarre: why should anyone want to believe it? But, before answering this, there is a prior question: what exactly does the Mentalese hypothesis mean?


We can divide this question into two further questions:


1 What does it mean for there to be a symbol – any symbol – in someone’s head?


句话某人脑海是什么意思

We can address these questions by returning to the nature of symbols in general. Perhaps, when we first think about words and other symbols (e.g. pictures), we think of them as visually detectable: we see words on the page, traffic signs and so on. But, of course, in the case of words, it is equally common to hear sentences when we hear other people speaking. And many of us are familiar with other ways of storing and transmitting sentences: through radio waves, patterns on magnetic tape, and in the magnetic disks and electronic circuitry of a computer.

There are many ways, then, in which symbols can be stored and transmitted. Indeed, there are many ways in which the very same symbols can be stored, transmitted or (as I shall say) realised. The English sentence ‘The man who broke the bank at Monte Carlo died in misery’ can be written, spoken, or stored on magnetic tape or a computer disk. But, in some sense, it is still the same sentence. We can make things absolutely precise here if we distinguish between types and tokens of words and sentences. In the list of words ‘Est! Est! Est!’ the same type of word appears three times: there are, as philosophers and linguists say, three tokens of the same type. In our example of a sentence, the same sentence-type has many physical tokens, and the tokens can be realised in very different ways.

I shall call these different ways of storing different tokens of the same type of sentence the different media in which they are realised. Written English words are one medium, spoken English words are another and words on magnetic tape yet another. The same sentence can be realised in many different media. However, for the discussion that follows, we need another distinction. We need to distinguish not just between the different media in which the same symbols can be stored, but also between the different ways in which the same message or the same content can be stored.

Consider a road sign with a schematic picture in a red triangle of two children holding hands. The message this sign conveys is: ‘Beware! Children crossing!’. Compare this with a verbal sign that says in English: ‘Beware! Children crossing!’. These two signs express the same message, but in very different ways. This difference is not captured by the idea of a medium, as that term was meant to express the difference between the different ways in which the same (for example) English sentence can be realised by different physical materials. But, in the case of the road sign, we don’t have a sentence at all.

I’ll call this sort of difference in the way a message can be stored a difference in the vehicle of representation. The same message can be stored in different vehicles, and these vehicles can be ‘realised’ in different media. The most obvious distinction between vehicles of representation is that which can be made between sentences and pictures, though there are other kinds. For example, some philosophers have claimed that there is a kind of natural representation, which they call ‘indication’. This is the kind of representation in which the rings of a tree, for example, represent or indicate the tree’s age.7 This is clearly neither linguistic nor pictorial representation: a different kind of vehicle is involved. (See Chapter 5, ‘Causal theories of mental representation’, p. 175. We shall meet another kind of vehicle in the section ‘“Brainy” computers’ below, p. 159.)

Now that we have the distinction between the medium and the vehicle of representation, we can begin to formulate the Mentalese hypothesis. The hypothesis says that sentences are written in the head. This means that, whenever someone believes, say, that prices are rising, the vehicle of this thought is a sentence. And the medium in which this sentence is realised is the neural structure of the brain. The rough idea behind this second statement is this: think of the brain as a computer, with its neurons and synapses making up its ‘primitive processors’. To make this vivid, think of neurons, the constituent cells of the brain, as rather like the logic gates of Chapter 3: they emit an output signal (‘fire’) when their inputs are of the appropriate kind. Then we can suppose that combinations of these primitive processors (in some way) make up the sentence of Mentalese whose translation into English is ‘Prices are rising’.

So much for the first question. The second question was: suppose there are representations in the head; what does it mean to think of these representations as sentences? That is, why should there be a language of thought, rather than some other system of representation (e.g. pictures in the head)?

Syntax and semantics

To say that a system of representation is a language is to say that its elements (sentences and words) have a syntactic and semantic structure. We met the terms ‘syntax’ and ‘semantics’ in our discussion of Searle’s Chinese room argument, and it is now time to say more about them. (You should be aware that what follows is only a sketch, and, like so many terms in this area, ‘syntax’ and ‘semantics’ are quite controversial terms, used in subtly different ways by different authors. Here I only mean to capture the uncontroversial outlines.)

Essentially, syntactic features of words and sentences in a language are those that relate to their form rather than their meaning. A theory of syntax for a language will tell us what the basic kinds of expression are in the language, and which combinations of expressions are legitimate in the language – that is, which combinations of expressions are grammatical or ‘well formed’. For example, it is a syntactic feature of the complex expression ‘the Pope’ that it is a noun phrase, and that it can only legitimately occur in sentences in certain positions: ‘The Pope leads a jolly life’ is grammatical, but ‘Life leads a jolly the Pope’ is not. The task of a syntactic theory is to say what the fundamental syntactic categories are, and which rules govern the production of grammatically complex expressions from combinations of the simple expressions.
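For illustration only (my toy example, not a serious grammar), a fragment of such a syntactic theory can even be written as a small Python program: categories for expressions, plus one rule for combining them, with meaning playing no role at all:

CATEGORIES = {
    "the Pope": "NP",              # noun phrase
    "life": "NP",
    "leads a jolly life": "VP",    # verb phrase
}

def well_formed(first, second):
    # toy rule: a sentence is a noun phrase followed by a verb phrase
    return CATEGORIES.get(first) == "NP" and CATEGORIES.get(second) == "VP"

print(well_formed("the Pope", "leads a jolly life"))  # True: grammatical
print(well_formed("leads a jolly life", "the Pope"))  # False: ill-formed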

In what sense can symbols in the head have syntax? Well, certain symbols will be classified as simple symbols, and rules will operate on these symbols to produce complex symbols. The task facing the Mentalese theorist is to find these simple symbols, and the rules which operate on them. This idea is not obviously absurd once we’ve accepted the idea of symbols in the head at all – so let’s leave syntax for the moment and move on to semantics.

Semantic features of words and sentences are those that relate to their meaning. While it is a syntactic feature of the word ‘pusillanimous’ that it is an adjective, and so can only appear in certain places in sentences, it is a semantic feature of ‘pusillanimous’ that it means . . . pusillanimous – that is to say, spineless, weak-willed, a pushover. A theory of meaning for a language is called a ‘semantic theory’, and ‘semantics’ is that part of linguistics which deals with the systematic study of meaning.


In fact, it is because symbols have semantic features that they are symbols at all. It is of the essence of symbols that they represent or stand for things; representation is a semantic relation. But semantics is not only about the relations between words and the world; it is also about the relations of words to one another. A sentence such as ‘Cleopatra loves Anthony’ has three constituents, ‘Cleopatra’, ‘loves’ and ‘Anthony’, all of which can occur in other sentences, such as ‘Cleopatra committed suicide’, ‘Desdemona loves Cassio’ and ‘Anthony shirked his duty’. Ignoring, for convenience, the complications introduced by metaphor, idiom, ambiguity and the fact that more than one person can share a name – not trivial omissions, but ones we can make at this stage – it is normally recognised that these words mean the same when they occur in these other sentences as they do when they occur in the original sentence.

This fact, though it might appear trivial and obvious at first, is actually very important. The meaning of sentences is determined by the meanings of their parts and their mode of combination, i.e. their syntax. So the meaning of the sentence ‘Cleopatra loves Anthony’ is entirely determined by the meanings of the constituents ‘Cleopatra’, ‘loves’ and ‘Anthony’, the order in which they occur, and by the syntactic role of these words (the fact that the first and third words are nouns and the second is a verb). This means that, when we understand the meaning of a word, we can understand its contribution to any other sentence in which it occurs. And many people think that it is this fact that explains how it is that we are able to understand sentences that we have not previously encountered. For example, I doubt whether you have ever encountered this sentence before:

There are fourteen rooms in the bridge.

However odd the sentence may seem, you certainly know what it means, because you know what the constituent words mean and what their syntactic place in the sentence is. (For example, you are able to answer the following questions about the sentence: ‘What is in the bridge?’, ‘Where are the rooms?’, ‘How many rooms are there?’.) This fact about languages is called ‘semantic compositionality’. According to many philosophers and linguists, it is this feature of languages which enables us to learn them at all.8
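The point can be put in miniature (my sketch, with a toy three-word grammar): given a finite lexicon and a rule of combination, the meaning of a sentence you have never met before falls out automatically:

LEXICON = {
    "Cleopatra": "CLEOPATRA", "Anthony": "ANTHONY",
    "Desdemona": "DESDEMONA", "Cassio": "CASSIO",
    "loves": "LOVES",
}

def meaning(sentence):
    # compose the sentence's meaning from its parts and their order
    subject, verb, obj = sentence.split()
    return (LEXICON[verb], LEXICON[subject], LEXICON[obj])

print(meaning("Desdemona loves Anthony"))  # interpretable, though never encountered before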

To grasp this point, it may help to contrast a language with a representational system which is not compositional in this way: the system of coloured and patterned flags used by ships. Suppose there is one flag which means ‘yellow fever on board’ and another which means ‘customs inspectors welcome’. But, given only these resources, you cannot combine your knowledge of the meanings of these symbols to produce another symbol, e.g. one that says ‘yellow fever inspectors welcome’. What is more, when you encounter a flag you have never seen before, no amount of knowledge of the other flags can help you understand it. You have to learn the meaning of each flag individually. The difference with a language is that, even though you may learn the meanings of individual words one by one, this understanding gives you the ability to form and understand any number of new sentences. In fact, the number of sentences in a language is potentially infinite. But, for the reasons given, it is plain that, if a language is to be learnable, the number of basic significant elements (words) has to be finite. Otherwise, encountering a new sentence would always be like encountering a new flag on the ship – which it plainly isn’t.

In what sense can symbols in the head have semantic features? The answer should now be fairly obvious. They can have semantic features because they represent or stand for things in the world. If there are sentences in the head, then these sentences will have semantically significant parts (words) and these parts will refer to or apply to things in the world. What is more, the meanings of the sentences will be determined by the meanings of their parts plus their mode of combination. For the sake of simple exposition, let’s make the chauvinistic assumption that Mentalese is English. Then, to say that I believe that prices are rising is to say that there is a sentence written in my head, ‘Prices are rising’, whose meaning is determined by the meanings of the constituent words ‘prices’, ‘are’ and ‘rising’, and by their mode of combination.

The argument for the language of thought

So, now that we have an elementary grasp of the ideas of syntax and semantics, we can say precisely what the Mentalese hypothesis is. The hypothesis is that, when a thinker has a belief or desire with the content P, there is a sentence (i.e. a representation with semantic and syntactic structure) that means P written in their heads. The vehicles of representation are linguistic, while the medium of representation is the neural structure of the brain.

The attentive reader will have noticed that there is something missing from this description. For, as we saw in Chapter 1, different thoughts can have the same content: I can believe that prices will fall, I can desire that prices will fall, I can hope that prices will fall, and so on. The Mentalese hypothesis says that these states all involve having a sentence with the meaning prices will fall written in the heads of the thinkers. But surely believing that prices will fall is a very different kind of mental state from hoping that prices will fall – how does the Mentalese hypothesis explain this difference?

The short answer is: it doesn’t. A longer answer is that it is not the aim of the Mentalese hypothesis to explain the difference between belief and desire, or between belief and hope. What it aims to explain is not the difference between believing something and desiring it, but the difference between believing (or desiring) one thing and believing (or desiring) something else. In the terminology of attitudes and contents, introduced in Chapter 1, the aim is to explain what it is to have an attitude with a certain content, not what it is to have this attitude rather than that one. Of course, believers in Mentalese do think that there will be a scientific theory of what it is to have a belief rather than a desire, but this theory will be independent of the Mentalese hypothesis itself.

We can now return to our original question: why should we believe that the vehicle of mental representation is a language? The inventor of the Mentalese hypothesis, Jerry Fodor, has advanced two influential arguments to answer this question, which I will briefly outline. The second will take a bit more exposition than the first.

The first argument relies on a comparison between the ‘compositionality’ of semantics, discussed in the previous section, and an apparently similar phenomenon in thought itself. Remember that if someone understands the English sentence ‘Cleopatra loves Anthony’, they are ipso facto in a position to understand other sentences containing those words, provided that they understand the other words in those sentences. At the very least, they can understand the sentence ‘Anthony loves Cleopatra’. Similarly, Fodor claims, if someone is able to think Cleopatra loves Anthony, then they are also able to think Anthony loves Cleopatra. Whatever it takes to think the first thought, nothing more is needed to be able to think the second. Of course, they may not believe that Anthony loves Cleopatra merely because they believe that Cleopatra loves Anthony; but they can at least consider the idea that Anthony loves Cleopatra.

Fodor claims that the best explanation of this phenomenon is that thought itself has a compositional structure, and that having a compositional structure amounts to having a language of thought. Notice that he is not saying that the phenomenon logically entails that thought has a compositional syntax and semantics. It is possible that thought could exhibit the phenomenon without there being a language of thought – but Fodor and his followers believe that the language of thought hypothesis is the best scientific explanation of this aspect of thought.

Fodor’s second argument relies on certain assumptions about mental processes or trains of thought. This argument will help us see in what sense exactly the Mentalese hypothesis is a computational theory of cognition or thought. To get a grip on this argument, consider the difference between the following two thought-processes:

1 Suppose I want to go to Ljubljana, and I can get there by train or by bus. The bus is cheaper, but the train will be more pleasant, and leaves at a more convenient time. However, the train takes longer, because the bus route is more direct. But the train involves a stop in Vienna, which I would like to visit. I weigh up the factors on each side, and I decide to sacrifice time and money for the more salubrious environment of the train and the attractions of a visit to Vienna.

2 Suppose I want to go to Ljubljana, and I can get there by train or by bus. I wake up in the morning and look out the window. I see two pigeons on the rooftop opposite. Pigeons always make me think of Venice, which I once visited on a train. So I decide to go by train.

My conclusion is the same in each case, but the methods are very different. In the first case, I use the information I have, weighing up the relative desirability of the different outcomes. In short, I reason: I make a reasoned decision from the information available. In the second case, I simply associate ideas. There is no particularly rational connection between pigeons, Venice and trains – the ideas just ‘come into my mind’. Fodor argues that, in order for common-sense psychological explanations (of the sort we examined in Chapter 2) to work, much more of our thinking must be like the first case than the second. In Chapter 2, I defended the idea that, if we are to make sense of people’s behaviour, we must see them as pursuing goals by reasoning, drawing sensible conclusions from what they believe and want. If all thinking were of the ‘free association’ style, it would be very hard to do this: from the outside, it would be very hard to see the connection between people’s thoughts and their behaviour. The fact that it is not very hard strongly suggests that most thinking is not free associating.

Fodor is not denying that free associating goes on. But what he is aiming to emphasise is the systematic, rational nature of many mental processes.9 One way in which thinking can be systematic is illustrated by example 1 above, when I am reasoning about what to do. Another is when reasoning about what to think. To take a simple example: I believe that the Irish philosopher Bishop Berkeley thought that matter is a contradictory notion. I also believe that nothing contradictory can exist, and I believe that Bishop Berkeley believed that too. I conclude that Bishop Berkeley thought that matter does not exist and that, if matter does exist, then he is wrong. Because I believe that matter does exist, I conclude that Bishop Berkeley was wrong. This is an example of reasoning about what to think.

Inferences like this are the subject matter of logic. Logic studies those features of inference that do not depend on the specific contents of the inferences – that is, logic studies the form of inferences. For example, from the point of view of logic, the following simple inferences can be seen as having the same form or structure:

If I will visit Ljubljana, I will go by train.
I will visit Ljubljana.
Therefore: I will go by train.


If matter exists, Bishop Berkeley was wrong.
Matter exists.
Therefore: Bishop Berkeley was wrong.

What logicians do is represent the form of inferences like these, regardless of what any particular instance of them might mean, that is to say regardless of their specific content. For example: using the letters P and Q to represent the constituent sentences above, and the arrow ‘→’ to represent ‘if . . . then . . . ’, we can represent the form of the above inferences as follows:

P → Q
P
Therefore: Q

Logicians call this particular form of inference modus ponens. Arguments with this form hold good precisely because they have this form. What does ‘holds good’ mean? Not that its premises and conclusions will always be true: logic alone cannot give you truths about the nature of the world. Rather, the sense in which it holds good is that it is truth-preserving: if you start off with truths in your premises, you will preserve truth in your conclusion. A form of argument that preserves truth is what logicians call a valid argument: if your premises are true, then your conclusions must be true.
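Because the rule operates on form alone, it can be implemented directly. The Python sketch below (my illustration) derives conclusions without any sensitivity to what P and Q mean, yet it is truth-preserving for exactly the reason just given:

def modus_ponens(premises):
    # fire on the shape 'antecedent -> consequent' plus 'antecedent'
    conclusions = []
    for p in premises:
        if " -> " in p:
            antecedent, consequent = p.split(" -> ", 1)
            if antecedent in premises:
                conclusions.append(consequent)
    return conclusions

print(modus_ponens(["matter exists -> Berkeley was wrong",
                    "matter exists"]))  # ['Berkeley was wrong']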

Defenders of the Mentalese hypothesis think that many transitions among mental states – many mental processes, or trains of thought, or inferences – are like this: they are truth-preserving because of their form. When people reason logically from premises to conclusions, the conclusions they come up with will be true if the premises they started with are true and they use a truth-preserving method or rule. So, if this is true, the items which mental processes process had better have form. And this, of course, is what the Mentalese hypothesis claims: the sentences in our head have a syntactic form, and it is because they have this syntactic form that they can interact in systematic mental processes.

To understand this idea, we need to understand the link between three concepts: semantics, syntax/form and causation. The link can

be spelled out by using the comparison with computers. Symbols in a computer have semantic and ‘formal’ properties, but the processors in the computer are sensitive only to the formal properties. How? Remember the simple example of the ‘and-gate’ (Chapter 3: ‘Thinking computers’, p. 109). The causal properties of the and-gate are those properties to which the machine is causally sensitive: the machine will output an electric current when and only when it takes electric currents from both inputs. But this causal process encodes the formal structure of ‘and’: a sentence ‘P and Q’ will be true when and only when P is true and Q is true. And this formal structure mirrors the meaning of ‘and’: any word with that formal structure will have the meaning ‘and’ has. So the causal properties of the device mirror its formal properties, and these in turn mirror the semantic properties of ‘and’. This is what enables the computer to perform computations by performing purely causal operations.
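A minimal sketch may help (again mine, not the book’s). The function below is ‘causally sensitive’ only to whether both its inputs are on; checking its input–output table against the truth table for ‘and’ exhibits the mirroring just described:

def and_gate(input1, input2):
    # current flows out when and only when current flows in on both inputs
    return input1 and input2

# the gate's causal profile matches the formal structure of 'and' exactly
for p in (True, False):
    for q in (True, False):
        assert and_gate(p, q) == (p and q)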

Likewise with the language of thought. When someone reasons from their belief that P → Q (i.e. if P then Q) and their belief that P to the conclusion Q, there is inside them a causal process which mirrors the purely formal relation of modus ponens. So the elements in the causal process must have components which mirror the component parts of the inference, i.e. form must have a causal basis.

All we need to do now is make the link between syntax and semantics. The essential point here is much more complicated, but it can be illustrated with the simple form of logical argument discussed above. Modus ponens is valid because of its form: but this purely formal feature of the argument does guarantee something about its semantic properties. What it guarantees is that the semantic property of truth is preserved: if you start your reasoning with truths, and only use an argument of the modus ponens form, then you will be guaranteed to get only truths at the end of your reasoning. So reasoning with this purely formal rule will ensure that your semantic properties will be ‘mirrored’ by the formal properties. Syntax does not create semantics, but it keeps it in tow. As John Haugeland has put it, ‘if you take care of the syntax, the semantics will take care of itself’.10

We now have the link that we wanted between three things: the

semantic features of mental representations, their syntactic features and their causal features. Fodor’s claim is that, by thinking of mental processes as computations, we can link these three kinds of feature together:

Computers show us how to connect semantical with causal properties for symbols . . . You connect the causal properties of a symbol with its semantic properties via its syntax . . . we can think of its syntactic structure as an abstract feature of its . . . shape. Because, to all intents and purposes, syntax reduces to shape, and because the shape of a symbol is a potential determinant of its causal role, it is fairly easy . . . to imagine symbol tokens interacting causally in virtue of their syntactic structures. The syntax of a symbol might determine [its] causes and effects . . . in much the same way that the geometry of a key determines which locks it will open.11

What the hypothesis gives us, then, is a way of connecting the representational properties of thought (its content) with its causal nature. The link is provided by the idea of a mental syntax that is realised in the causal structure of the brain, rather as the formal properties of a computer’s symbols are realised in the causal structure of the computer. The syntactic or formal properties of the representations in a computer are interpretable as calculations, or inferences, or pieces of reasoning – they are semantically interpretable – and this provides us with a link between causal properties and semantic properties. Similarly, it is hoped, with the link between the content and causation of thought.

The Mentalese hypothesis is a computational hypothesis because it invokes representations which are manipulated or processed according to formal rules. It doesn’t say what these rules are: this is a matter for cognitive science to discover. I used the example of a simple logical rule, for simplicity of exposition, but it is no part of the Mentalese hypothesis that the only rules that will be discovered will be the laws of logic.

What might these other rules be? Defenders of the hypothesis often appeal to computational theories of vision as an illustration of the sort of explanation that they have in mind. The computational

theory of vision sees the task for the psychology of vision as that of explaining how our visual system produces a representation of the 3D visual environment from the distribution of light on the retina. The theory claims that the visual system does this by creating a representation of the pattern of light on the retina and making computational inferences in various stages, to arrive finally at the 3D representation. In order to do this, the system has to have built into it the ‘knowledge’ of certain rules or principles, to make the inference from one stage to the next. (In this short book I cannot give a detailed description of this sort of theory, but there are many good introductions available: see the Further reading section, p. 167.)
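To give just a flavour of the idea of staged inference, here is a toy sketch of my own (nothing like a real computational model of vision). A row of ‘retinal’ intensities is processed in two stages, using the built-in rule that sharp changes in intensity mark object boundaries:

def edge_map(intensities, threshold=0.5):
    # stage 1: infer a boundary wherever intensity changes sharply
    return [abs(b - a) > threshold
            for a, b in zip(intensities, intensities[1:])]

def count_surfaces(edges):
    # stage 2: infer the number of distinct surfaces from the boundaries
    return 1 + sum(1 for boundary in edges if boundary)

retina = [0.1, 0.1, 0.9, 0.9, 0.2, 0.2]     # light falling on the 'retina'
print(count_surfaces(edge_map(retina)))     # -> 3 inferred surfaces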

Of course, we cannot state these principles ourselves without knowledge of the theory. The principles are not accessible to introspection. But, according to the theory, we do ‘know’ these principles in the sense that they are represented somehow in our minds, whether or not we can access them by means of introspection. This idea originates in Noam Chomsky’s linguistic theory.12 Chomsky has argued for many years that the best way to explain our linguistic performance is to postulate that we have knowledge of the fundamental grammatical rules of our language. But the fact that we have this knowledge does not imply that we can bring it into our conscious minds. The Mentalese hypothesis proposes that this is how things are with the rules governing thought-processes. As I mentioned in Chapter 2, defenders of this sort of knowledge sometimes call it ‘tacit knowledge’.13

Notice, finally, that the Mentalese hypothesis is not committed to the idea that all of mental life involves processing linguistic representations. It is consistent with the hypothesis to hold, for example, that sensations are not wholly representational. But it is also consistent with the hypothesis to hold that there could be processes that ‘manipulate’ non-linguistic representations. One particularly active area of research in cognitive science, for example, is the study of mental imagery. If I ask you the question ‘Do frogs have lips?’ there is a good chance that you will consider this question by forming a mental image and mentally ‘inspecting’ it. According to some cognitive scientists, there is a sense in which there actually are

representations in your head which have a pictorial structure, which can be ‘rotated’, ‘scanned’ and ‘inspected’. Perhaps there are pictures in the head after all! So a cognitive scientist could consistently hold that there are such pictorial representations while still maintaining that the vehicles of reasoning are linguistic. (For suggestions on how to pursue this fascinating topic, see the Further reading section, p. 167.)

The modularity of mind

The argument for the Mentalese hypothesis, as I have presented it, is an example of what is called an inference to the best explanation. A certain undeniable or obvious fact is pointed out, and then it is shown that this obvious fact would make sense, given the truth of our hypothesis. Given that there is no better rival hypothesis, this gives us a reason to believe our hypothesis. This is the general shape of an inference to the best explanation, and it is a central and valuable method of explanation that is used in science.14 In our case, the obvious fact is the systematic nature of the semantic properties of thought: the general fact that is revealed by phenomena described in the Anthony and Cleopatra example above. Fodor’s argument relies on the fact that mental processes exploit this systematicity in the rational transitions from thought to thought. Trains of thought have a rational structure, and they have causal outcomes which are dependent on this rational structure. The best explanation of this, Fodor claims, is that there is an inner medium of representation – Mentalese, the language of thought (LOT) – with the semantic and syntactic properties described above.

But in many areas of the mind, though there is good reason to suppose that there is mental representation, there does not seem to be anything like a fully rational process going on. What should a defender of Mentalese say about this? Take the case of visual perception, for example. As we saw in the previous section, psychologists who study vision tend to treat the visual system as processing representations: from the representation of the distribution of light reflected onto the retina, to the eventual construction of a representation of the objective scene around the perceiver. But there is a

sense in which visual perception is not a rational process in the way in which thought is, and this would remove the immediate motivation for postulating a language of thought for visual perception. This point is a way of introducing a further important proposal of Fodor’s about the structure of the mind: the proposal that the mind is modular.

We are all familiar with the phenomenon of a visual illusion, where something visually seems to be the way it is not. Consider the Mach bands (named after the great physicist Ernst Mach, who discovered the illusion) depicted in Figure 4.1. On first seeing these, your initial reaction will be that each stripe is not uniformly grey, but that the shade becomes slightly lighter on the side of the stripe nearer the darker stripe. This is the way it looks. But on closer inspection you can see that each stripe is actually uniformly grey. Isolate one of the stripes between two pieces of paper, and this becomes obvious. So, now you know, and therefore believe, that each stripe is uniformly coloured grey. But it still looks as if they are not, despite what you know! For our present purposes, what is interesting is not so much that your visual system is deceived by

Figure 4.1 Mach bands. The stripes are actually of a uniform shade of grey, but they seem lighter at the edges that are closer to the darker stripes.

this illusion, but that the illusion persists even when you know that it is an illusion.

One thing this clearly shows is that perceiving is not the same as judging or believing. For, if perceiving were just a form of believing, then your current psychological state would be a conflict between believing that the stripes are uniformly coloured and believing that the stripes are not uniformly coloured. This would be a case of explicitly contradictory belief: you believe that something is the case and that it is not the case, simultaneously and consciously. No rational person can live with such explicit contradictions in their beliefs. It is impossible to know what conclusions can be reasonably drawn from the belief that P and not-P; and it is impossible to know how to act on the basis of such a belief. Therefore, the rational person attempts to eliminate explicit contradictions in his or her belief, on pain of irrationality. Faced with a situation where one is inclined to believe one thing and its opposite, one has to make up one’s mind, and go for one or the other. One is obliged, as a rational thinker, to aim to eliminate inconsistency in one’s thought.

But, in the case of the Mach bands illusion, there is no question of eliminating the inconsistency. There is nothing one can do to stop the lines looking as if they were unevenly shaded, no matter how hard one tries. If perception were just a form of belief, as some have argued, then this would be a case of irrationality.15 But it plainly isn’t: one has no difficulty, once apprised of the facts, in knowing what conclusions to draw from this combination of belief and perception, and in knowing how to act on it. One’s rationality is not at all undermined by this illusory experience. Therefore, perception is not belief.

What kind of overall picture of the mind is suggested by phenomena such as this? Jerry Fodor has argued that they provide evidence for the view that the visual system is a relatively isolated ‘mental module’, an information-processing system which is, in important respects, independent from the ‘central system’ responsible for belief and reasoning.16 Fodor holds also that other ‘input systems’ – for example, the systems which process linguistic input – are modular in this way. The thesis that the mind has this overall

structure – central system plus modules – is called the thesis of the modularity of mind. This modularity thesis has been very influential in psychology and cognitive science. Many psychologists believe in some version of the thesis, though it is controversial how much of the mind is modular. Here, I will briefly try and give some sense of the nature and scope of the thesis.

What exactly is a module? On Fodor’s original introduction of the notion, a module is a functionally defined part of the mind whose most important feature is what he calls informational encapsulation.17 (‘Functionally defined’ here means defined in terms of what it does, rather than what it is made out of.) A cognitive mechanism is informationally encapsulated when it systematically does not have access to all the information in a thinker’s mind when performing its characteristic operations. An informationally encapsulated computational mechanism may deliver as output the conclusion P, even if somewhere else in the subject’s mind there is the knowledge that not-P: but, what is more, the knowledge that not-P cannot change the output of the computational mechanism. To use a phrase of Zenon Pylyshyn’s, the mechanism’s output is not ‘cognitively penetrable’: it cannot be penetrated by other areas of the cognitive system, specifically by beliefs and knowledge.

The point is easy to understand when applied to a concrete example. No matter how hard you try, you cannot see the stripes in the Mach bands as uniformly shaded grey, even though you know that they are. The knowledge that you have about the way in which they are actually coloured cannot penetrate the output of your visual system. Fodor’s explanation for this is that the visual system (and other ‘input systems’) are informationally encapsulated, and that is the essence of what it is to be a module. Of course, illusions like the Mach bands need detailed explanation in terms of the detailed working of the visual system; Fodor’s point is that this explanation must take place within the context of a modular view of perception, rather than according to a view of perception which treats it as a kind of cognition or belief.
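The structural point can be caricatured in a few lines of code (a toy sketch of my own, not Fodor’s). The ‘module’ below computes its output from its own input alone; the central system’s beliefs are simply not among its inputs, so there is no route by which they could correct it:

central_beliefs = {'stripes uniformly grey': True}

def visual_module(retinal_input):
    # encapsulated: central_beliefs is deliberately not consulted here
    return 'the stripes look unevenly shaded'

def central_system(percept, beliefs):
    # unencapsulated: the judgement may draw on any belief in the stock
    if beliefs.get('stripes uniformly grey'):
        return 'judged: uniformly grey, despite how they look'
    return 'judged: ' + percept

percept = visual_module('mach bands')
print(percept)                                   # the illusion persists...
print(central_system(percept, central_beliefs))  # ...though belief corrects the judgement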

Fodor contrasts modules such as the visual system with ‘central systems’ or ‘central mind’. Central mind is the home of the normal

propositional attitudes, the states which participate in reasoning and inference, and intellectual and practical problem solving. Where belief is concerned, the structure of the belief system allows one to use information in reasoning that comes from any part of one’s stock of beliefs and knowledge. Of course, people are irrational, they have blind spots, and they deceive themselves. But the point is that these shortcomings are personal idiosyncrasies; they are not built into the belief system itself. The situation is different with visual processing and the other modules.

As a result of this informational encapsulation, various other properties ‘cluster’ around a module. Modules are domain specific: they use information only from a restricted cognitive domain, i.e. they can’t represent just any proposition about the world, unlike thought. The visual system represents only visually perceptible properties of the environment, for example. Also, modules tend to be mandatory: one can’t help seeing things a certain way, hearing a sentence as grammatical or not, etc. They are innate, not acquired; we are born with them. They may well be hard-wired, i.e. realised in a dedicated part of the brain that, if damaged, cannot be replaced by activity elsewhere in the brain. And they are fast, much faster than processes in central mind. These features all come about as a result of informational encapsulation: ‘what encapsulation buys is speed; and it buys speed at the price of intelligence’.18 Just as he contrasts the modules with central mind, Fodor likes to compare them with reflexes. A reflex, such as the blink reflex, is fast and unconstrained by what one might believe or know – this makes perfect sense, given the blink reflex’s function of protecting the eyes. You don’t want to stop to think about whether that wasp is really going to fly into your eye; your eye short-circuits thought. Modules are not reflexes, as they contain states with representational content; but the comparison makes it clear why all (or some of, or most of) the above properties tend to be associated with what Fodor calls modules. (It is worth mentioning that Chomsky has used the term ‘module’ in a different way: for him, a module is just a body of innate knowledge. Chomsky’s idea of a module involves no commitment to informational encapsulation.19)

Since Fodor proposed this thesis in 1983, there has been an active debate among psychologists and philosophers about the extent of modularity. How many modules are there? Fodor was originally very cautious: he suggested that each perceptual system is modular, and that there was a module for language processing. But others have been more adventurous: some have argued, for example, that the tacit knowledge of the theory of other minds is an innate module, on the hypothesis that it can be damaged – and thus damage interpersonal interactions – while leaving much of general intelligence intact. (It is often claimed that this is the source of autism: autistic children typically have high general intelligence but lack ‘theory of mind’.20) Others go even further and argue that the mind is ‘massively modular’: there is a distinct, more or less encapsulated mechanism for each kind of cognitive task. There might be a module for recognising birds, a module for beliefs about cookery and maybe even a module for philosophy. And so on.

If massive modularity is true, then there is no distinction between central mind and modules, simply because there is no such thing as central mind: no such thing as a non-domain-specific, unencapsulated, cognitive mechanism. Our mental faculties would be much more fragmented than they seem from the point of view of common-sense psychology. Suppose I have a module for thinking about food (I am not saying anyone has proposed there is such a module, but we can use this as an example to illustrate the thesis). Could it really be true that my reasoning about what to cook for dinner is restricted to information available to this food module alone? Doesn’t it make sense to suppose that it must also be sensitive to information about whether I want to go out later, whether I want to lose weight, whether I want to impress and please my friends and so on? Maybe these could be thought of as pieces of information belonging to the same module; but how, then, do we distinguish one module from another?

Furthermore, as Fodor has shown, the thesis is subject to a quite general problem: if there is no general-purpose, non-domain-specific cognitive mechanism, then how does the mind decide, for any given input, which module should deal with that input? The decision

procedure for assigning input to modules cannot itself be modular, as it must select from information which is going to be treated by many different modules. It looks as if the massive modularity thesis will end up undermining itself.21

Problems for the language of thought

The discussion of modularity was something of a digression. But I hope it has given us a sense of the relationship between the modularity thesis and the computational theory of cognition. Now let’s return to the Mentalese hypothesis. The hypothesis seems to many people – both in philosophy and outside – to be an outlandish piece of speculation, easily refuted by philosophical argument or by empirical evidence. In fact, it seems to me that matters are not as simple as this, and the hypothesis can defend itself against the strongest of these attacks. I will here discuss two of the most interesting criticisms of the Mentalese hypothesis, as they are of general philosophical interest, and they will help us to refine our understanding of the hypothesis. Despite the power of these arguments, however, I believe that Fodor can defend himself against his critics.

Homunculi again?

We have talked quite freely about sentences in the head, and their interpretations. In using the comparison with computers, I said that the computer’s electronic states are ‘interpretable’ as calculation, or as the processing of sentences. We have a pretty good idea how these states can have semantic content or meaning: they are designed by computer engineers and programmers in such a way as to be interpretable by their users. The semantic features of a computer’s states are therefore derived from the intentions of the designers and users of the computer.22

Or consider sentences in a natural language like English. As we saw in Chapter 2, there is a deep problem about how sentences get their meaning. But one influential idea is that sentences get their meaning because of the way they are used by speakers in conversation, writing, soliloquy, etc. What exactly this means doesn’t matter here; what matters is the plausible idea that sentences come to mean what they do because of the uses speakers put them to.

But what about Mentalese? How do its sentences get to mean something? They clearly do not get their meaning by being consciously used by thinkers, otherwise we could know from introspection whether the Mentalese hypothesis was true. But to say that they get their meaning by being used by something else seems to give rise to what is sometimes called the ‘homunculus fallacy’. This argument could be expressed as follows.

Suppose we explain the meaning of Mentalese sentences by saying that there is a sub-system or homunculus in the brain that uses these sentences. How does the homunculus manage to use these sentences? Here, there is a dilemma. On the one hand, if we say that the homunculus uses the sentences by having its own inner language, then we have to explain how the sentences in this language get their meaning: but appealing to another smaller homunculus clearly only raises the same problem again. But, on the other hand, if we say that the homunculus manages to use these sentences without having an inner language, then why can’t we say the same about people?

The problem is this. Either the sentences of Mentalese get their meaning in the same way that public language sentences do, or they get their meaning in some other way. If they get their meaning in the same way, then we seem to be stuck with a regress of homunculi. But if they get their meaning in a different way, then we need to say what that way is. Either way, we have no explanation of how Mentalese sentences mean anything.

Some writers think that this sort of objection cripples the Mentalese hypothesis.23 But, in a more positive light, it could be seen not as an objection but as a challenge: explain the semantic features of the language of thought, without appealing to the ideas you are trying to explain. There are two possible ways to respond to the challenge. The first would be to accept the homunculus metaphor but deny that homunculi necessarily give rise to a vicious regress. This response originates from an idea of Daniel Dennett’s (mentioned on p. 107 in ‘Automatic algorithms’, Chapter 3). What we need to ensure

is that, when we postulate one homunculus to explain the capacities of another, we do not attribute to it the capacities we are trying to explain. Any homunculus we postulate must be more stupid than the one whose behaviour we are trying to explain, otherwise we have not explained anything.24

However, as Searle has pointed out, if, at the bottom computational level, the homunculus is still manipulating symbols, these symbols must have a meaning, even if they are just 1s and 0s. And, if there is a really stupid homunculus below this level – think of it as one who just moves the tape of a Turing machine from side to side – then it is still hard to see how the mere existence of this tape-moving homunculus alone can explain the fact that the 1s and 0s have meaning. The problem of getting from meaningless activity to meaningful activity just seems to arise again at this lowest level.

The second, more popular, approach to the challenge is to say that Mentalese sentences have their meaning in a very different kind of way to the way that public language sentences do. Public language sentences may acquire their meaning by being intentionally used by speakers, but this cannot be how it is with Mentalese. The sentences of Mentalese, as Fodor has said, have their effects on a thinker’s behaviour ‘without having to be understood’.25 They are not understood because they are not consciously used at all: the conscious use of sentences stops in the outside world. There are no homunculi who use sentences in the way that we do.

This does avoid the objection. But now, of course, the question is: how do Mentalese sentences get their meaning? This is a hard question, which has been the subject of intense debate. It will be considered in Chapter 5.

Following a rule vs. conforming to a rule

Searle also endorses the second objection that I shall mention here, which derives from some well-known objections raised by

W.V. Quine to Chomsky’s thesis that we have tacit knowledge of grammar.26 Remember that the Mentalese hypothesis says that thinking is rule governed, and even that, in some ‘tacit’ sense, we

know these rules. But how is this claim to be distinguished from the claim that our thinking conforms to a rule, that we merely act and think in accordance with a rule? As we saw in Chapter 3, the planets conform to Kepler’s laws, but do not ‘follow’ or ‘know’ these laws in any literal sense. The objection is that, if the Mentalese hypothesis cannot explain the difference between following a rule and merely conforming to a rule, then much of its substance is lost.
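A toy contrast may sharpen the distinction (my illustration; the rule and functions are invented for the example). Both systems below behave in accordance with the rule ‘output the sum of the inputs’, but only the second contains anything that looks like a representation of the rule – and notice that the second still needs some further mechanism to interpret what it has stored:

def conforming_adder(x, y):
    # merely conforms to the rule: nothing in here represents it,
    # any more than the planets represent Kepler's laws
    return x + y

STORED_RULE = 'output the sum of the inputs'    # an explicit representation

def rule_consulting_adder(x, y):
    # consults the stored rule, but something must still *interpret* it,
    # and that interpreter is just more (rule-conforming) mechanism
    if STORED_RULE == 'output the sum of the inputs':
        return x + y
    raise ValueError('unrecognised rule')

assert conforming_adder(2, 3) == rule_consulting_adder(2, 3) == 5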

Notice that it will not help to say that the mind contains an explicit representation of the rule (i.e. a sentence stating the rule). For a representation of a rule is just another representation: we would need another rule to connect this rule-representation to the other representations to which it applies. And to say that this ‘higher’ rule must be explicitly represented just raises the same problem again.

The question is not ‘What makes the Mentalese hypothesis computational?’: it is computational because sentences of Mentalese are representations that are governed by computational rules. The question is ‘What sense can be given to the idea of “governed by computational rules”?’. I think the defender of Mentalese should respond by explaining what it is for a rule to be implicitly represented in the causal structure of mental processes. To say that rules are implicitly represented is to say that the behaviour of a thinker can be better explained on the assumption that the thinker tacitly knows a rule than on the assumption that he or she does not. What now needs to be explained is the idea of tacit knowledge. But I must leave this to the reader’s further investigations, as there is a further point about rules that needs to be made.27

Some people might be concerned by the use of a logical example in my exposition of the Mentalese hypothesis. For it is plain that human beings do not always reason in accordance with the laws of logic. But, if rules such as modus ponens are supposed to causally govern actual thinking, how can this be? An alternative is to say that the rules of logic do not describe human thinking, but rather prescribe ways in which humans ought to think. (This is sometimes put by saying that the rules of logic are ‘normative’ rather than ‘descriptive’.) One way of putting the difference is to say that, if we were to find many exceptions to physical laws, we would think that

we had got the laws wrong in some way. But if we find a person behaving illogically we do not think that we have got the laws of logic wrong; rather, we label the person irrational or illogical.

This point does not arise just because the example was taken from logic. We could equally well take an example from the theory of practical reasoning. Suppose the rule is ‘act rationally’. When we find someone consistently acting in a way that conflicts with this rule, we might do one of two things: we might reject the rule as a true description of that person’s behaviour or we might keep the rule and say that the person is irrational. The challenge I am considering says we should do the latter.

The Mentalese hypothesis cannot allow that the rules governing thought are normative in this way. So what should it say? I think it should say two things, one defensive and one more aggressive. The defensive claim is that the hypothesis is not at this stage committed to the idea that the normative laws of logic and rationality are the rules which operate on Mentalese sentences. It is a scientific/empirical question as to which rules govern the mind, and the rules we have mentioned may not be among them. The aggressive claim is that, even if something like these rules did govern the mind, they would be idealisations from the complex, messy actual behaviour of minds. To state the rules properly, we would have to add a clause saying ‘all other things are equal’ (called a ceteris paribus clause). But this does not undermine the scientific nature of Mentalese, because ceteris paribus clauses are used in other scientific theories too.28

These worries about rules are fundamental to the Mentalese hypothesis. The crux of the hypothesis is that thinking is the rule-governed manipulation of mental sentences. As one of the main arguments for syntactic structure was the idea that mental processes are systematic, it turns out that the crucial question is: is human thinking rule governed in the sense in which the hypothesis says? Are there laws of thought for cognitive science to discover? Indeed, can the nature of human thought be captured in terms of rules or laws at all?

We have encountered this question before when discussing

Dreyfus’s objections to artificial intelligence. Dreyfus is opposed to the idea of human thinking that inspires orthodox cognitive science and the Mentalese hypothesis: the idea that human thought can be exhaustively captured by a set of rules and representations. In opposition to this, he argues that a practical activity, a network of bodily skills that cannot be reduced to rules, underlies human intel- ligence. In the previous chapter, we looked at a number of ways in which AI could respond to these criticisms. However, some people think it is possible to accept some of Dreyfus’s criticisms without giving up a broadly computational view of the mind.29 This possibil- ity might seem very hard to grasp – the purpose of the next section is to explain it.

‘Brainy’ computers

Think of the things computers are good at. Computers have been built that excel at fast calculation, the efficient storage of information and its rapid retrieval. Artificial intelligence programs have been designed that can play excellent chess, and can prove theorems in logic. But it is often remarked that, compared with computers, most human beings are not very good at calculating, playing chess, proving theorems or rapid information retrieval of the sort achieved by modern databases (most of us would be hopeless at memorising something like our address books: that’s why we use computers to do this). What is more, the sorts of tasks which come quite naturally to humans – such as recognising faces, perceiving linguistic structures and practical bodily skills – have been precisely those tasks which traditional AI and cognitive science have found hardest to simulate and/or explain.

Traditional cognitive science and AI have regarded these problems as challenges, requiring more research time and more finely tuned algorithms and heuristics. But since around the middle of the 1980s, these problems have come to be seen as symptomatic of a more general weakness in the orthodox approach in cognitive science, as another computational approach has begun to gain influence. Many people think that this new approach – known

as ‘connectionism’ – represents a serious alternative to traditional accounts like Fodor’s Mentalese hypothesis. Whether this is true is a very controversial question but what does seem to be true is that the existence of connectionism threatens Fodor’s ‘pragmatic’ defence of Mentalese, that it is ‘the only game in town’. (In The Language of Thought, Fodor quotes the famous remark of Lyndon

B. Johnson: ‘I’m the only president you’ve got’.) But the existence of connectionism also challenges the argument for Mentalese outlined above, based on an inference to the best explanation; as, if there are other good explanations in the offing, then Mentalese has to fight harder to show that it is the best.

The issues surrounding connectionism are extremely technical, and it would be beyond the scope of this book to give a detailed account of this debate. So the purpose of this final section is merely to give an impression of these issues, in order to show how there could be a kind of computational theory of the mind that is an alternative to the Mentalese hypothesis and its kin. Those who are not interested in this rather more technical issue can skip this section and move straight to the next chapter. Those who want to pursue it further can follow up the suggestions in the Further reading section. I’ll begin by saying what defines ‘orthodox’ approaches, and how connectionist models differ.

The Mentalese hypothesis construes computation in what is now called an orthodox or ‘classical’ way. Machines with a classical computational ‘architecture’ (sometimes called a von Neumann architecture) standardly involve a distinction between data-structures (essentially, explicit representations of pieces of information) and rules or programs which operate on these structures. Representations in classical architectures have syntactic structure, and the rules apply to the representations in virtue of this structure, as I illustrated above. Also, representations are typically processed in series rather than in parallel – all this means is that the program operates on the data in a step-by-step way (as represented, for example, by the program’s flow-chart) as opposed to carrying out lots of operations at the same time. (This sort of computational architecture is sometimes called the ‘rules and representations’ picture; applied to

AI, John Haugeland has labelled it ‘GOFAI’, an acronym for ‘good old-fashioned AI’.30)

Connectionist architecture is very different. A connectionist machine is a network consisting of a large number of units or nodes: simple input–output devices which are capable of being excited or inhibited by electric currents. Each unit is connected to other units (hence ‘connectionism’), and the connections between the units can be of various strengths, or ‘weights’. Whether a unit gives a certain output – standardly, an electric current – depends on its firing threshold (the minimum input required to turn it on) and the strengths of its connections to other units. That is, a unit is turned on when the combined strength of the inputs from the units connected to it exceeds its threshold. This in turn will affect the strength of all its connections to other units, and therefore whether those units are turned on.
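In code, a single unit is almost trivially simple (a sketch of my own, with invented numbers):

def unit_output(inputs, weights, threshold):
    # fire (output 1) when the total weighted input exceeds the threshold
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# two excitatory connections and one inhibitory (negatively weighted) one
print(unit_output([1, 1, 1], [0.6, 0.7, -0.4], threshold=0.5))   # -> 1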

Units are arranged in ‘layers’ – there is normally an input layer of units, an output layer and one or more layers of ‘hidden’ units, mediating between input and output. (See Figure 4.2 for an idealised diagram.) Computing in connectionist networks involves first fixing

Figure 4.2 Diagram of a connectionist network: layers of input units, ‘hidden’ units and output units, with connections between the units.

the input units in some combination of ‘ons’ and ‘offs’. Because the input units are connected to the other units, fixing their initial state causes a pattern of activation to spread through the network. This pattern of activation is determined by the strengths of the connections between the units and the way the input units are fixed. Eventually, the network ‘settles down’ into a stable state – the units have brought themselves into equilibrium with the fixed states of the input units – and the output can be read off the layer of output units. One notable feature is that this process happens in parallel – i.e. the changes in the states of the network are taking place across the network all at once, not in a step-by-step way.

For this to be computation, of course, we need to interpret the layers of input and output units as representing something. Just as in a classical machine, representations are assigned to connectionist networks by the people who build them; but the ways in which they are assigned are very different. Connectionist representation can be of two kinds: localist interpretations, in which each unit is assigned a feature that it represents; or distributed interpretations, in which it is the state of the network as a whole that represents. Distributed representation is often claimed to be one of the distinctive features of connectionism – the approach itself is often known as parallel distributed processing or PDP. I’ll say a bit more about distributed representation in a moment.

A distinctive feature of connectionist networks is that it seems that they can be ‘trained to learn’. Suppose you wanted to get the machine to produce a certain output in response to input (for example, there is a network which converts the present tense of English verbs into their past tense forms31). Start by feeding in the input, and let a fairly random pattern of activation spread throughout the machine. Check the output, and see how far it diverges from the desired output. Then repeatedly alter the strengths of the connections between the units until the output is the desired one. This kind of trial-and-error method is known as ‘training the network’. The interesting thing is that, once a network has been trained, it can extend what it has ‘learned’ to new samples, with some success. This is how connectionist systems ‘learn’ things.
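Here is a toy version of such a training loop (my sketch; the weight-nudging rule is a simple perceptron-style rule, and real connectionist research uses more sophisticated learning algorithms such as backpropagation). On each trial, every connection strength is nudged in whatever direction reduces the gap between the actual and the desired output:

def train(samples, weights, rate=0.1, epochs=50):
    for _ in range(epochs):
        for inputs, target in samples:
            total = sum(i * w for i, w in zip(inputs, weights))
            output = 1 if total > 0 else 0
            error = target - output   # how far from the desired output?
            # strengthen or weaken each connection in proportion to its input
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
    return weights

# teach a tiny network the 'and' pattern (first input is a fixed bias of 1)
samples = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
weights = train(samples, [0.0, 0.0, 0.0])
for inputs, target in samples:
    total = sum(i * w for i, w in zip(inputs, weights))
    assert (1 if total > 0 else 0) == target   # the trained weights fit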

Connectionist machines are sometimes called ‘neural networks’, and this name gives a clue to part of their appeal for some cognitive scientists. With their vast number of interconnected (yet simple) units, and the variable strengths of connection between the units, they resemble the structure of the brain much more closely than any classical machine. Connectionists therefore tend to claim that their models are more biologically plausible than those with classical architecture. However, these claims can be exaggerated: there are many properties of neurons that these units do not have.32

Many connectionists also claim that their models are more psychologically plausible, i.e. connectionist networks behave in a way that is closer to the way the human mind works than classical machines do. As I mentioned above, classical computers are very bad at doing lots of the sorts of task that we find so natural – face and pattern recognition, for example. Connectionist enthusiasts often argue that these are precisely the sorts of tasks that their machines can excel at.

I hope this very sketchy picture has given you some idea of the difference between connectionist and classical cognitive science. You may be wondering, though, why connectionist machines are computers at all. Certainly, the idea of a pattern of activation spreading through a network doesn’t look much like the sort of computing we looked at in Chapter 3. Some writers insist on a strict definition of ‘computer’ in terms of symbol manipulation, and rule connectionist machines out on these grounds.33 Others are happy to see connectionist networks as instances of the very general notion of a computer, as something that transforms an input representation into an output representation in a disciplined way.34

In part, this must be an issue about terminology: everyone will agree that there is something in common between what a connectionist machine does and what a classical computer does, and everyone will agree that there are differences too. If they disagree about whether to call the similarities ‘computing’, this cannot be a matter of great importance. However, I side with those who say that connectionist machines are computers. After all, connectionist networks compute input–output functions in a systematic way, by

using (localised or distributed) representations. And, when they learn, they do so by employing ‘learning algorithms’ or rules. So there’s enough in common to call them both computers – although this may just be a result of the rather general definition I gave of a computer in Chapter 3.

But this is not the interesting issue. The interesting issue is what the fundamental differences are between connectionist machines and classical machines, and how these differences bear on the theory of mind. As with many issues in this area, there is no general consensus on how this question should be answered. But I will try to outline what I see to be the most important points.

The difference is not just that a connectionist network can be described at the simplest computational level in terms which do not have natural interpretations in common-sense (or scientific) psychological language (e.g. as a belief that ‘passed’ is the past tense of ‘pass’). For, in a classical machine, there is a level of processing – the level of ‘bits’ or binary digits of information – at which the symbols processed have no natural psychological interpretation.35 As we saw in Chapter 3, a computer works by breaking down the tasks it performs into simpler and simpler tasks: at the simplest level, there is no interpretation of the symbols processed as, say, sentences, or as the contents of beliefs and desires.

But the appeal of classical machines was that these basic operations could be built up in a systematic way to construct complex symbols – as it may be, words and sentences in the language of thought – upon which computational processes operate. According to the Mentalese hypothesis, the processes operate on the symbols in virtue of their form or syntax. The hypothesis is that Mentalese sentences are (a) processed ‘formally’ by the machine and (b) representations: they are interpretable as having meaning. That is: one and the same thing – the Mentalese sentence – is the vehicle of computation and the vehicle of mental content.

This need not be so with connectionist networks. As Robert Cummins puts it, ‘connectionists do not assume that the objects of computation are the objects of semantic interpretation’.36 That is, computations are performed by the network by the activation (or

inhibition) of units increasing (or decreasing) the strength of the connections between them. ‘Learning’ takes place when the relations between the units are systematically altered in a way that produces an output close to the target. So computation is performed at the level of simple units. But there need be no representation at this simple level: where distributed representation is involved, the states of the network as a whole are what are interpreted as representing. The vehicles of computation – the units – need not be the vehicles of representation, or psychological interpretation. The vehicles of representation can be states of the whole network.

This point can be put in terms of syntax. Suppose, for simplicity, that there is a Mentalese word, ‘dog’, which has the same syntactic and semantic features as the English word ‘dog’. Then the defender of Mentalese will say that, whenever you have a thought about dogs, the same type of syntactic structure occurs in your head. So, if you think ‘some dogs are bigger than others’ and you also think ‘there are too many dogs around here’, the word ‘dogs’ appears both times in your head. Connectionists deny that this need be so: they say that when you have these two thoughts, the mechanisms in your head need have nothing non-semantic in common. As two of the pioneers of connectionism put it, ‘the currency of our systems is not symbols, but excitation and inhibition’.37 In other words: thoughts do not have syntax.

An analogy of Scott Sturgeon’s might help to make this dif- ference between the vehicles of computation and vehicles of representation vivid.38 Imagine a vast rectangular array of electric lights as big as a football pitch. Each individual light can glow on or off to a greater or lesser extent. By changing the illumination of each light, the whole pitch can display patterns which when seen from a distance are English sentences. One pattern might read ‘We know your secret!’, another might read ‘Buy your tickets early to avoid disappointment’. These words are created purely by altering the illumination of the individual lights there is nothing at this level of ‘processing’ which corresponds to the syntax or semantics of the words. The word ‘your’ is displayed by one bank of lights in the first array and by another bank of lights in the second: but at the

level of ‘processing’, these banks of lights need have nothing else in common (they need not even be the same shape: consider YOUR and your). The objects of ‘processing’ (the individual lights) are not the objects of representation (the patterns on the whole pitch).

This analogy might help to give you an impression of how basic processing can produce representation without being ‘sensitive’ to the syntax of symbols. But some might think the analogy is very misleading, because it suggests that the processing at the level of units is closer to the medium of representation, rather than the vehicle (to use the terminology introduced earlier in this chapter). A classical theory will agree that its words and sentences are implemented or realised in the structure of the brain; and they can have no objections to the idea that there might be an ‘intermediate’ level of realisation in a connectionist-like structure. But they can still insist that, if cognition is systematic, then its vehicle needs to be systematic too; and, as connectionist networks are not systematic, they cannot serve as the vehicle of cognition, but only as the medium.

This is, in effect, one of the main lines of criticism pursued by Fodor and Zenon Pylyshyn against connectionism as a theory of mental processing.39 As we saw above, it is central to Fodor’s theory that cognition is systematic: if someone can think Anthony loves Cleopatra then they must be able to at least consider the thought that Cleopatra loves Anthony. Fodor takes this to be a fundamental fact about thought or cognition which any theory has to explain, and he thinks that a language-like mechanism can explain it: for it is built into the very idea of compositional syntax and semantics. He and Pylyshyn then argue that there is no guarantee that connectionist networks will produce systematic representations – but, if they do, they will be merely ‘implementing’ a Mentalese-style mechanism. In the terminology of this chapter: either the connectionist network will be the mere medium of a representation whose vehicle is linguistic or the network cannot behave with systematicity.

How should connectionists respond to this argument? In broad outline, they could take one of two approaches. They could either argue that cognition is not systematic in Fodor’s sense or they could

argue that while cognition is systematic, connectionist networks can be systematic too. If they take the first approach, they have to do a lot of work to show how cognition can fail to be systematic. If they take the second route, then it will be hard for them to avoid Fodor and Pylyshyn’s charge that their machines will end up merely ‘implementing’ Mentalese mechanisms.

Conclusion: does computation explain representation?

What conclusions should we draw about the debate between connectionism and the Mentalese hypothesis? It is important to stress that both theories are highly speculative: they suggest large-scale pictures of how the mechanisms of thought might work, but detailed theories of human reasoning are a long way in the future. Moreover, like the correctness of the computational theory of cognition in general, the issue cannot ultimately be settled philosophically. It is an empirical or scientific question whether our minds have a classical Mentalese-style architecture, a connectionist architecture or some mixture of the two – or, indeed, whether our minds have any kind of computational structure at all. But now, at least, we have some idea of what would have to be settled in the dispute between the computational theory and its rivals.

Let’s now return to the problem of representation. Where does this discussion of minds and computers leave this problem? In a sense, the problem is untouched by the computational theory of cognition. Because computation has to be defined in term of the idea of representation, the computational theory of cognition takes representation for granted. So, if we still want to explain representa- tion, we need to look elsewhere. This will be the topic of the final chapter.

Further reading

The MIT Encyclopedia of the Cognitive Sciences, edited by Robert A. Wilson and Frank A. Keil (Cambridge, Mass.: MIT Press 1999) is the best one-volume reference work on all aspects of cognitive science: psychology,

linguistics, neuroscience and philosophy. A more advanced introduction to the issues discussed in this chapter is Kim Sterelny’s The Representational Theory of Mind: an Introduction (Oxford: Blackwell 1990). Fodor first introduced his theory in The Language of Thought (Hassocks: Harvester 1975), but the best account of it is probably Psychosemantics: the Problem of Meaning in the Philosophy of Mind (Cambridge, Mass.: MIT Press 1987; especially Chapter 1 and the appendix), which, like everything of Fodor’s, is written in a lively, readable and humorous style. See also the essay ‘Fodor’s guide to mental representation’ in his collection A Theory of Content and Other Essays (Cambridge, Mass.: MIT Press 1990). The influential modularity thesis was introduced in The Modularity of Mind (Cambridge, Mass.: MIT Press 1983), and Fodor’s latest views on this thesis and on the computational theory of mind in general can be found in The Mind Doesn’t Work That Way (Cambridge, Mass.: MIT Press 2000). One of Fodor’s persistent critics has been Daniel Dennett; his early essay ‘A cure for the common code?’ in Brainstorms (Hassocks: Harvester 1978; reprinted by Penguin Books in 1997) is still an important source of ideas for those opposed to the Mentalese hypothesis. A collection of articles, many of which are concerned with questions raised in this chapter, is William G. Lycan (ed.) Mind and Cognition (Oxford: Blackwell, 2nd edn 1998). David Marr’s Vision (San Francisco, Calif.: Freeman 1982) is a classic text on the computational theory of vision; Chapter 4 of Sterelny’s book (see above) gives a good account from a philosopher’s point of view. Steven Pinker’s The Language Instinct (Harmondsworth: Penguin 1994) is a brilliant and readable exposition of the Chomskian view of language, and much more besides. For mental imagery, see Stephen Kosslyn’s Image and Brain (Cambridge, Mass.: MIT Press 1994). A simple introduction to connectionism can be found in the chapter on connectionism in the second edition of Paul Churchland’s Matter and Consciousness (Cambridge, Mass.: MIT Press 1988), and there is also a chapter on connectionism in Sterelny’s book. An excellent summary, intelligible to the non-specialist, is Brian McLaughlin’s ‘Computationalism, connectionism and the philosophy of mind’ in The Blackwell Guide to Computation and Information (Oxford: Blackwell 2002).

5

Explaining mental representation

The last two chapters have involved something of a detour through some of the philosophical controversies surrounding the computational theory of the mind and artificial intelligence. It is now time to return to the problem of representation, introduced in Chapter 1. How has our discussion of the computational theory of the mind helped us in understanding this problem?

On the one hand, it has helped to suggest answers. For we saw that the idea of a computer illustrates how representations can also be things that have causes and effects. Also, the standard idea of a computational process – that is, a rule-governed causal process involving structured representations – enables us to see how a merely mechanical device can digest, store and process representations. And, though it may not be plausible to suppose that the whole mind is like this, in Chapter 4 we examined some ways in which thought-processes at least could be computational.

But, on the other hand, the computational theory of the mind does not, in itself, tell us what makes something a representation. The reason for this is simple: the notion of computation takes representation for granted. A computational process is, by definition, a rule-governed or systematic relation among representations. To say that some process or state is computational does not explain its representational nature, it presupposes it. Or, to put it another way, to say merely that there is a language of thought is not to say what makes the words and sentences in it mean anything.

This brings us, then, to the topic of this final chapter: how should the mechanical view of the mind explain representation?

Reduction and definition

The mechanical view of the mind is a naturalistic view – it treats the mind as part of nature, where ‘nature’ is understood as the subject


matter of natural science. In this view, an explanation of the mind needs an explanation of how the mind fits into the rest of nature, so understood. In this book, I have been considering the more specific question: how can mental representation fit into the rest of nature? One way to answer this question is simply to accept representation as a basic natural feature of the world. There are many kinds of natural objects and natural features of the world – organisms, hormones, electric charge, chemical elements, etc. – and some of them are basic while others are not. By ‘basic’, I mean that they need not, or cannot, be further explained in terms of other facts or concepts. In physics, for example, the concept of energy is accepted as basic – there is no explanation of energy in terms of any other concepts. Why not take representation, then, as one of the basic features of the world?

This view could defend itself by appealing to the idea that representation is a theoretical notion – a notion whose nature is explained by the theories to which it belongs (rather like the notion electron). Remember the discussion of theories in Chapter 2. There, we saw that, according to one influential view, the nature of a theoretical entity is exhausted by the things the theory says about it. The same sorts of things can be said about representation: representation is just what the theory of representation tells us it is. There is no need to ask any further questions about its nature.

I shall return to this sort of theory at the end of the chapter. But, to most naturalistic philosophers, it is an unsatisfactory approach to the problem. They would say that representation is still a philosophically problematic concept, and we get no real understanding of it by accepting it (or the theory of it) as primitive. They would say: consider what we know about the rest of nature. We know, for example, that light is electromagnetic radiation. In learning how light is related to other electromagnetic phenomena, we find out something ‘deeper’ about the nature of light. We find out what light fundamentally is. This is the sort of understanding that we need of the notion of representation. Jerry Fodor puts the point in this way:


I suppose sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the [microphysical properties] spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t: intentionality simply doesn’t go that deep.1

Whatever we think about such views, it is clear that what Fodor and many other philosophers want is an explanation of intentionality in other terms – that is, in terms of concepts other than the concepts of representation. There are a number of ways in which this could be done. One obvious way would be to give necessary and sufficient conditions for claims of the form ‘X represents Y’. (The concepts of necessary and sufficient conditions were explained in Chapter 1.) Necessary and sufficient conditions for ‘X represents Y’ will be those conditions which hold when, and only when, X represents Y – described in terms that don’t mention the concept of representation at all. To put this precisely and neatly, we need the technical term ‘if and only if’. (Remember that, as ‘A if B’ expresses the idea that B is a sufficient condition for A and ‘A only if B’ expresses the idea that B is a necessary condition for A, we can express the idea that B is a necessary and sufficient condition for A by saying ‘A if and only if B’.)
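The logical shape of these claims can be set out schematically – a sketch in standard logical notation, which the text itself does not use:

\[
\begin{aligned}
\text{`$A$ if $B$'} &:\quad B \rightarrow A && \text{($B$ is a sufficient condition for $A$)}\\
\text{`$A$ only if $B$'} &:\quad A \rightarrow B && \text{($B$ is a necessary condition for $A$)}\\
\text{`$A$ if and only if $B$'} &:\quad A \leftrightarrow B && \text{($B$ is a necessary and sufficient condition for $A$)}
\end{aligned}
\]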

The present claim about representation can then be expressed by a principle of the following form, which I shall label (R):2

(R) X represents Y if and only if ___

So, for example, in Chapter 1 I considered the idea that the basis of pictorial representation might be resemblance. We could express this as follows:

X (pictorially) represents Y if and only if X resembles Y.

Here the ‘___’ is filled in by the idea of resemblance. (Of course, we found this idea inadequate – but here it is just being used as an example.)

The principle (R) defines the concept of representation by reducing it to other concepts. For this reason, it can be called a reductive definition of the concept of representation. Reductive definitions have been thought by many philosophers to give the nature or essence of a concept. But it is important to be aware that not all definitions are reductive. To illustrate this, let’s take the example of colour. Many naturalistic philosophers have wanted to give a reductive account of the place of colours in the natural world. Often, they have tried to formulate a reductive definition of what it is for an object to have a certain colour in terms of (say) the wavelength of the light it reflects. So they might express such a definition as follows:

(1) X is red if and only if X reflects light of wavelength N, where N is some number.

There is a fascinating debate about whether colours can be reductively defined in (anything like) this way.3 But my present concern is not with the theory of colour, but just to use it as an illustration of a point about definition. For some philosophers think that it is a mistake to aim for a reductive definition of colour at all. They think that the most we can really expect is a definition of colour in terms of how things look to normal perceivers. For instance:

(2) X is red if and only if X looks red to normal perceivers in normal circumstances.

This is not a wholly reductive definition, because being red is not defined in other terms – the right-hand side of the definition mentions looking red. Some philosophers think something similar about the notion of representation or content: we should not expect to be able to define the concept of representation in other terms. I shall return to this at the end of the chapter.

Conceptual and naturalistic definitions

The example of colour serves to illustrate another point about definitions in terms of necessary and sufficient conditions. One reason why one might prefer (2) (the non-reductive definition of being red) to (1) is that (2) does not go beyond what we know when we understand the concept of the colour red. As soon as we understand the concept of red, we can understand that red things look red to normal perceivers in normal circumstances, and that things which look red to normal perceivers in normal circumstances are red. But, in order to understand the concept of red, we don’t need to know anything about wavelengths of light or reflectance. So (1) tells us more than what we know when we know the concept.

We can put this by saying that (2), unlike (1), attempts to give conceptually necessary and sufficient conditions for being red. It gives those conditions which in some sense ‘define the concept’ of red. On the other hand, (1) does not define the concept of red. There surely are people who have the concept of red, who can use the concept red and yet who have never heard of wavelengths, let alone know that light is electromagnetic radiation. Instead, (1) gives what we could call naturalistic necessary and sufficient conditions of being red: it tells us in scientific terms what it is for something to be red. (Naturalistic necessary and sufficient conditions for being red are sometimes called ‘nomological’ conditions, as they characterise the concept in terms of natural laws – ‘nomos’ is the Greek for ‘law’.)

The idea of a naturalistic necessary (or sufficient) condition should not be hard to grasp in general. When we say that you need oxygen to stay alive, we are saying that oxygen is a necessary condition for life: if you are alive, then you are getting oxygen. But this is arguably not part of the concept of life, because there is nothing wrong with saying that something could be alive in a way that does not require oxygen. We can make sense of the idea that there is life on Mars without supposing that there is oxygen on Mars. So the presence of oxygen is a naturalistic necessary condition for life, rather than a conceptual necessary condition.

Some philosophers doubt whether there are any interesting reductive conceptually necessary and sufficient conditions – that is, conditions which give reductive conceptual definitions of concepts.4 They argue, inspired by Quine or Wittgenstein, that even the sorts of examples which have been traditionally used to illustrate the idea of conceptual necessary and sufficient conditions are problematic. Take Quine’s famous example of the concept bachelor. It looks extremely plausible at first that the concept of a bachelor is the concept of an unmarried man. To put it in terms of necessary and sufficient conditions:

X is a bachelor if and only if X is an unmarried man.

This looks reasonable, until we consider some odd cases. Does a bachelor have to be a man who has never married, or can the term apply to someone who is divorced or widowed? What about a fifteen-year-old male youth – is he a bachelor, or do you have to be over a certain age? If so, what age? Is the Pope a bachelor, or does a religious vocation prevent his inclusion? Was Jesus a bachelor? Or does the concept only apply to men at certain times and in certain cultures?

Of course, we could always legislate that bachelors are all those men above the age of twenty-five who have never been married and who do not belong to any religious order . . . and so on, as we choose. But the point is that we are legislating: we are making a new decision, and thus going beyond what we know when we know the concept. The surprising truth is that the concept does not, by itself, tell us where to draw the line around all bachelors. Because many (perhaps most) concepts are like this, the argument goes, it begins to look impossible to give informative conceptual necessary and sufficient conditions for them.5

Now I don’t want to enter this debate about the nature of concepts here. I mention the issue only to illustrate a way in which one might be suspicious of the idea of conceptually necessary and sufficient conditions which are also reductive. The idea is that it is hard enough to get such conditions for a fairly simple concept like bachelor – so how much harder will it be for concepts like mental representation?

Many philosophers have drawn the conclusion that if we want reductive definitions we should instead look for naturalistic necessary and sufficient conditions for the concept of mental representation. The ‘___’ in our principle (R) would be filled in by a description of the naturalistic facts (e.g. physical, chemical or biological facts) which underpin representation. These would be naturalistic reductive necessary and sufficient conditions for representation.

What could these conditions be? Jerry Fodor has said that only two options have ever been seriously proposed: resemblance and causation.6 That is, either the ‘___’ is filled in by some claim about X resembling Y in some way or it is filled in by some claim about the causal relation between X and Y. To be sure, there may be other possibilities for reductive theories of representation – but Fodor is certainly right that resemblance and causation have been the main ideas actually appealed to by naturalist philosophers. In Chapter 1, I discussed, and dismissed, resemblance theories of pictorial representation. A resemblance theory for other kinds of representation (e.g. words) seems even less plausible, and the idea that all representation can be explained in terms of pictorial representation is, as we saw, hopeless. So most of the rest of this chapter will outline the elements of the main alternative: causal theories of representation.

Causal theories of mental representation

In a way, it is obvious that naturalist philosophers would try to explain mental representation in terms of causation. For part of naturalism is what I am calling the causal picture of states of mind: the mind fits into the causal order of the world and its behaviour is covered by the same sorts of causal laws as other things in nature (see Chapter 2). The question we have been addressing on behalf of the naturalists is: how does mental representation fit into all this? It is almost obvious that they should answer that representation is ultimately a causal relation – or, more precisely, that it is based on certain causal relations.

In fact, it seems that common-sense already recognises one sense in which representation or meaning can be a causal concept. H.P. Grice noticed that the concept of meaning is used in very different ways in the following two sentences:7

A red light means stop.

Those spots mean measles.

It is a truism that the fact that a red light means stop is a matter of convention. There is nothing about the colour red that connects it to stopping. Amber would have done just as well. On the other hand, the fact that the spots ‘mean’ measles is not a matter of convention. Unlike the red light, there is something about the spots that connects them to measles. The spots are symptoms of measles, and because of this can be used to detect the presence of measles. Red lights, on the other hand, are not symptoms of stopping. The spots are, if you like, natural signs or natural representations of measles: they stand for the presence of measles. Likewise, we say that ‘smoke means fire’, ‘those clouds mean thunder’ and what we mean is that smoke and clouds are natural signs (or representations) of fire and thunder. Grice called this kind of representation ‘natural meaning’.

Natural meaning is just a kind of causal correlation. Just as the spots are the effects of measles, the smoke is an effect of the fire and the clouds are the effects of a cause that is also the cause of thunder. The clouds, the smoke and the spots are all correlated causally with the things that we say they ‘mean’: thunder, fire and measles. Causal theories of mental representation hold that causal correlations between thoughts and the things they represent can form the natural basis of representation. But how, exactly?

It would of course be too simple to say that X represents Y when, and only when, Y causes X. (This is what Fodor calls the ‘crude causal theory’.8) I can have thoughts about sheep, but it is certainly not true that each of these thoughts is caused by a sheep. When a child gets to sleep at night by counting sheep, these thoughts about sheep need not be caused by sheep. Conversely, it doesn’t have to be true that when a mental state is caused by a sheep, it will represent a sheep. On a dark night, a startled sheep might cause me to be afraid – but I might be afraid because I represent the sheep as a dog, or a ghost.

In both these cases, what is missing is the idea that there is any natural and/or regular causal link between sheep and the thoughts in question. It is mere convention that associates sheep with the desire to get to sleep, and it is a mere accident that a sheep caused me to be afraid. If mental representation is going to be based on causal correlation, it will have to be based on natural regularities – as with smoke and fire – not merely on a causal connection alone.9

Let’s introduce a standard technical term for this sort of natural regularity: call the relation between X and Y, when X is a natural sign of Y, reliable indication. In general, X reliably indicates Y when there is a reliable causal link between X and Y. So, smoke reliably indicates fire, clouds reliably indicate thunder, and the spots reliably indicate measles. Our next attempt at a theory of representation can then be put as follows:

X represents Y if and only if X reliably indicates Y.

Applied to mental states, we can say that a mental state represents Y if and only if there is a reliable causal correlation between this type of mental state and Y.
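It may help to set the proposal out schematically. Writing Rep(X, Y) for ‘X represents Y’ and Ind(X, Y) for ‘X reliably indicates Y’ – labels of my own, not the text’s – the indication theory says:

\[
\mathrm{Rep}(X,Y) \;\leftrightarrow\; \mathrm{Ind}(X,Y)
\]

where Ind(X, Y) holds just in case tokens of X are causally correlated with Y in a lawlike way. (On one common gloss, deriving from Dretske’s information-theoretic work, the conditional probability of Y given X must be at, or close to, 1.)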

An obvious initial difficulty is that we can have many kinds of thought which are not causally correlated with anything at all. I can think about unicorns, about Santa Claus and about other non-existent things – but these ‘things’ cannot cause anything, as they do not exist. Also, I can think about numbers, and about other mathematical entities such as sets and functions – but, even if these things do exist, they cannot cause anything because they certainly do not exist in space and time. (A cause and its effects must exist in time if one is going to precede the other.) And, finally, I can think about events in the future – but events in the future cannot cause anything in the present because causes must precede their effects. How can causal theories of representation deal with these cases?

Causal theorists normally treat these sorts of cases as in some way special, and the result of the very complicated thought-producing mechanisms we have. Let’s take things slowly, they will say: start with the simple cases, the basic thoughts about the perceived environment, the basic drives (for food, drink, sex, warmth, etc.). If we can explain the representational powers of these states in terms of a notion like indication, then we can try and deal with the complex cases later. After all, if we can’t explain the simple cases in terms of notions like indication, we won’t have much luck with the complex cases. So there’s no point starting with the complex cases.

The advantages of a causal theory of mental representation for naturalistic philosophers are obvious. Reliable indication is everywhere: wherever there is this kind of causal correlation, there is indication. So, as indication is not a mysterious phenomenon, and not one unique to the mind, it would be a clear advance if we could explain mental representation in terms of it. If the suggestion works, then we would be on our way to explaining how mental representation is constituted by natural causal relations, and, ultimately, how mental representation fits into the natural world.

The problem of error

However, the ubiquity of indication also presents some of the major problems for the causal approach. For one thing (a), as representations will always indicate something, it is hard to see how they can ever misrepresent. For another (b), there are many phenomena which are reliably causally correlated with mental representations, yet which are not in any sense the items represented by them. These two problems are related – they are both aspects of the fact that causal theories of representation have a hard time accounting for errors in thought. This will take a little explanation.

Take the first problem, (a), first. Consider again Grice’s example of measles. We said that the spots represent measles because they are reliable indicators of measles. In general, if there are no spots, then there is no measles. But is the converse true – could there be spots without measles? That is to say, could the spots misrepresent measles? Well, someone could have similar spots because they have some other sort of disease – smallpox, for example. But these spots would then be indicators of smallpox. So the theory would have to say that they don’t misrepresent measles: they represent what they indicate, namely smallpox.

Of course, we could make a mistake, and look at the smallpox spots and conclude: measles! But this is irrelevant. The theory is meant to explain the representational powers of our minds in terms of reliable indication – on this theory, we cannot appeal to the interpretation we give of a phenomenon in explaining what it represents. This would get matters the wrong way round.

The problem is that, because what X represents is explained in terms of reliable indication, X cannot represent something it does not indicate. Grice made the point by observing that, where natural meaning is concerned, X means that p entails p: smoke’s meaning fire entails that there is fire. In general, it seems that, when X naturally means Y, this guarantees the existence of Y – but few mental representations guarantee the existence of what they represent. It is undeniable that our thoughts can represent something as the case even when it is not the case: error in mental representation is possible. So a theory of representation which cannot allow error can never form the basis of mental representation. For want of a better term, let’s call this the ‘misrepresentation problem’.

This problem is closely related to the other problem for the indication theory, which is known (for reasons I shall explain) as the ‘disjunction problem’. Suppose that I am able to recognise sheep – I am able to perceive sheep when sheep are around. My perceptions of sheep are representations of some sort – call them ‘S-representations’ for short – and they are reliable indicators of sheep, and the theory therefore says that they represent sheep. So far so good.

But suppose too that, in certain circumstances – say, at a distance, in bad light – I am unable to distinguish sheep from goats. And suppose that this connection is quite systematic: there is a reliable connection between goats-in-certain-circumstances and sheep perceptions. I have an S-representation when I see a goat. This looks like a clear case of misrepresentation: my S-representation misrepresents a goat as a sheep. But, if my S-representations are reliable indicators of goats-in-certain-circumstances, then why shouldn’t we say instead that they represent goats-in-certain-circumstances as well as sheep? Indeed, surely the indication theory will have to say something like this, as reliable indication alone is supposed to be the source of representation.

The problem, then, is that both sheep and goats-in-certain-circumstances are reliably indicated by S-representations. So it looks like we should say that an S-representation represents that either a sheep is present or a goat-in-certain-circumstances is present. The content of the representation, then, should be sheep or goat-in-certain-circumstances. This is called the ‘disjunction problem’ because logicians call the linking of two or more terms with an ‘or’ a disjunction.10

In case you think that this sort of example is a mere philosophical fantasy, consider this real-life example from cognitive ethology. The ethologists D.L. Cheney and R.M. Seyfarth have studied the alarm calls of vervet monkeys, and have conjectured that different types of call have different meanings, depending on what the particular call is provoked by. A particular kind of call, for example, is produced in the presence of leopards, and so is labelled by them a ‘leopard alarm’. But:

[T]he meaning of leopard alarm is, from the monkey’s point of view, only as precise as it needs to be. In Amboseli, where leopards hunt vervets but lions and cheetahs do not, leopard alarm could mean, ‘big spotted cat that isn’t a cheetah’ or ‘big spotted cat with the shorter legs’ . . . In other areas of Africa, where cheetahs do hunt vervets, leopard alarm could mean ‘leopard or cheetah’.11

These ethologists are quite happy to attribute disjunctive contents to the monkeys’ leopard alarms. The disjunction problem arises when we ask what it would be to misrepresent a cheetah as a leopard. Saying that the meaning of the alarm is ‘only as precise as it needs to be’ does not answer this question, but avoids it.

Let me summarise the structure of the two problems. The misrepresentation problem is that, if reliable indication is supposed to be a necessary condition of representation, then X cannot represent Y in the absence of Y. If it is a necessary condition for some spots to represent measles that they indicate measles, then the spots cannot represent measles in the absence of measles.

The disjunction problem is that, if reliable indication is supposed to be a sufficient condition of representation, then whatever X indicates will be represented by X. If it is a sufficient condition for an S-representation to represent a sheep that it reliably indicates sheep, then it will also be a sufficient condition for an S-representation to represent a goat-in-certain-circumstances that it indicates a goat-in-certain-circumstances. Whatever is indicated by a representation is represented by it: so the content of the S-representation will be sheep or goat-in-certain-circumstances.

Obviously, the two problems are related. They are both aspects of the problem that, according to the indication theory, error is not really possible.12 The misrepresentation problem makes error impossible by ruling out the representation of some situation (measles) when the situation does not exist. The disjunction problem, however, makes error impossible by ruling in the representation of too many situations (sheep-or-goats). In both cases, the indication theory gives the wrong answer to the question ‘What does this representation represent?’.
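In the schematic notation introduced above (again my gloss, not the text’s), the two problems come to this:

\[
\begin{aligned}
\text{(a)}\;& \mathrm{Rep}(X,Y) \rightarrow \mathrm{Ind}(X,Y), \text{ and indication guarantees its object; so there is no } \mathrm{Rep}(X,Y) \text{ without } Y.\\
\text{(b)}\;& \mathrm{Ind}(X,Y) \rightarrow \mathrm{Rep}(X,Y); \text{ so } \mathrm{Ind}(S,\textit{sheep}) \wedge \mathrm{Ind}(S,\textit{goat-in-}C) \text{ gives } S \text{ the content } \textit{sheep} \vee \textit{goat-in-}C.
\end{aligned}
\]

Either direction of the biconditional, taken on its own, closes off the possibility of error.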

How can the indication theory respond to these problems? The standard way of responding is to hold that, when something misrepresents, that means that conditions for representation (either inside or outside the organism) are not perfect: as Robert Cummins puts it, misrepresentation is malfunctioning.13 When conditions are ideal, there will not be any failure to represent: spots will represent measles in ideal conditions, and my S-representations will represent sheep (and not goats) in ideal conditions.

The idea, then, is that representation is definable as reliable indication in ideal conditions:

X represents Y if and only if X is a reliable indicator of Y in ideal conditions.

Error results from the conditions failing to be ideal in some way: bad light, distance, impairment of the sense organs, etc. (Ideal conditions are sometimes called ‘normal’ conditions.) But how should we characterise, in general, what ideal conditions are? Obviously, we can’t say that ideal conditions are those conditions in which representation takes place, otherwise our account will be circular and uninformative:

X represents Y if and only if X reliably indicates Y in those conditions in which X represents Y.

What we need is a way of specifying ideal conditions without mentioning representation.

Fred Dretske, one of the pioneers of the indication approach, tried to solve this problem by appealing to the idea of the teleological function of a representation.14 This is a different sense of ‘function’ from the mathematical notion described in Chapter 3: ‘teleological’ means ‘goal-directed’. Teleological functions are normally attributed to biological mechanisms, and teleological explanations are explanations in terms of teleological functions. An example of a teleological function is the heart’s function of pumping blood around the body. The idea of function is useful here because (a) it is a notion that is well understood in biology and (b) it is generally accepted that something can have a teleological function even if it is not exercising it: it is the function of the heart to pump blood around the body even when it is not actually doing so. So the idea is that X can represent Y, even when Y is not around, just in case it is X’s function to indicate Y. Ideal conditions are therefore conditions of ‘well-functioning’:15 conditions when everything is functioning as it should.
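Dretske’s amendment can be put in the same schematic form (my paraphrase of the proposal, not a formula from the text):

\[
\mathrm{Rep}(X,Y) \;\leftrightarrow\; X \text{ has the teleological function of indicating } Y
\]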

This suggests how the appeal to teleological functions can deal with what I am calling the misrepresentation problem. X can represent Y if it has the function of indicating Y; and it can have the function of indicating Y even if there is no Y around. Even in the dark, my eyes have the function of indicating the presence of visible objects. So far so good – but can this theory deal with the disjunction problem?

A number of philosophers, including Fodor (who originally favoured this sort of approach), have argued that it can’t. The problem is that something very like the disjunction problem applies to teleological functions too. The problem is well illustrated by a beautiful example of Dretske’s:

Some marine bacteria have internal magnets (called magnetosomes) that function like compass needles, aligning themselves (and as a result, the bacteria) parallel to the earth’s magnetic field. Since these magnetic lines incline downwards (towards geomagnetic north) in the northern hemisphere (upwards in the southern hemisphere), bacteria in the northern hemisphere . . . propel themselves towards geomagnetic north. The survival value of magnetotaxis (as this sensory mechanism is called) is not obvious, but it is reasonable to suppose that it functions so as to enable the bacteria to avoid surface water. Since these organisms are capable of living only in the absence of oxygen, movement towards geomagnetic north will take the bacteria away from oxygen-rich surface water and towards the comparatively oxygen-free sediment at the bottom.16

Let’s agree that the organism’s mechanism has a teleological function. But what function does it have? Is its function to propel the bacterium to geomagnetic north or is it to propel the bacterium to the absence of oxygen? On the one hand, the mechanism is itself a magnet; on the other hand, the point of having the magnet inside the organism is to get it to oxygen-free areas.

Perhaps it has both these functions. However, as it needn’t have them both together, we should really say that it has the complex function that we could describe as ‘propelling the bacterium to geomagnetic north OR propelling the bacterium to the absence of oxygen’. And this is where we can see that teleological functions have the same sorts of ‘disjunctive problems’ as indication does. As some people put it, teleological functions are subject to a certain ‘indeterminacy’: it is literally indeterminate which function something has. If this is right, then we cannot use the idea of teleological function to solve the disjunction problem – so long as representation is itself determinate.

For this reason, some causal theorists have turned away from teleological functions. Notable among these is Fodor, who has defended a non-teleological causal theory of mental representation, which he calls the ‘asymmetric dependence’ theory.17 Let’s briefly look at it. (Beginners may wish to skip to the next section.)

Suppose that there are some circumstances in which (to return to our example) sheep cause us to have S-representations. Fodor observes that, if there are conditions in which goats-in-certain-circumstances also cause us to have S-representations, it makes sense to suppose that goats do this only because sheep already cause S-representations. Although it makes sense to suppose that only sheep might cause representations of sheep, Fodor thinks it doesn’t make that much sense to suppose that only goats might cause representations of sheep. Arguably, if they did this, then S-representations would be goat-representations, not sheep-representations at all. To say that the goat-to-S-representation causal link is an error, then, is to say that goats would not cause S-representations unless sheep did. But sheep would still cause S-representations even if goats didn’t.

It is perhaps easier to grasp the point in the context of perception. Suppose some of my sheep-perceptions are caused by sheep. But some goats look like sheep – that is, some of my perceptions of goats (i.e. those caused by goats) seem to me to be like sheep-perceptions. But perceptions caused by goats wouldn’t seem like sheep-perceptions unless perceptions caused by sheep also seem like sheep-perceptions. And the reverse is not the case, i.e. perceptions caused by sheep would still seem like sheep-perceptions even if there were no sheep-perceptions caused by goats.

Fodor expresses this by saying that the causal relation between goats and sheep-representations is asymmetrically dependent on the causal relation between sheep and sheep-representations. What does this technical term mean? Let’s abbreviate ‘cause’ to an arrow, →, and let’s abbreviate ‘sheep-representation’ to the upper-case SHEEP. It will also help if we underline the causal claims being made. Fodor says that the causal relation goat → SHEEP is dependent on the causal relation sheep → SHEEP in the following sense:

If there hadn’t been a sheep → SHEEP connection, then there wouldn’t have been a goat → SHEEP connection.

But the goat → SHEEP connection is asymmetrically dependent on the sheep → SHEEP connection because:

If there hadn’t been a goat → SHEEP connection, there still would have been a sheep → SHEEP connection.

Therefore, there is a dependence between the goat → SHEEP connection and the sheep → SHEEP connection, but it is not symmetrical.
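Using the counterfactual conditional ‘□→’ (‘if it had been the case that . . . , then it would have been the case that . . .’) – notation borrowed from Lewis, not used in the text itself – the two claims can be sketched as:

\[
\begin{aligned}
&\neg(\textit{sheep} \to \mathrm{SHEEP}) \;\Box\!\!\to\; \neg(\textit{goat} \to \mathrm{SHEEP})\\
&\neg(\textit{goat} \to \mathrm{SHEEP}) \;\Box\!\!\to\; (\textit{sheep} \to \mathrm{SHEEP})
\end{aligned}
\]

The first counterfactual says that the goat → SHEEP connection depends on the sheep → SHEEP connection; the second says that the dependence does not hold the other way round. Hence the dependence is asymmetric.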

There are two points worth noting about Fodor’s theory. First, the role that the idea of asymmetric dependence plays is simply to answer the disjunction problem. Fodor is essentially happy with indication theories of representation – he just thinks you need something like asymmetric dependence to deal with the disjunction problem. So, obviously, if you have some other way of dealing with that problem – or you have a theory in which that problem does not arise – then you do not have to face the question of whether asymmetric dependence gives an account of mental representation.

Second, Fodor proposes asymmetric dependence as only a sufficient condition of mental representation. That is, he is claiming only that if these conditions (indication and asymmetric dependence) hold between X and Y, then X represents Y. He is not saying that any possible kind of mental representation must exhibit the asymmetric dependence structure, but that if something actually exhibits this structure, then it is a mental representation.

For myself, I am unable to see how asymmetric dependence goes any way towards explaining mental representation. I think that the conditions that Fodor describes probably are true of mental representations. But I do not see how this gives us a deeper understanding of how mental representation actually works. In effect, Fodor is saying: error is parasitic on true belief. But it’s hard not to object that this is just what we knew already. The question rather is: what is error? Until we can give some account of error, it does not really help us to say that it is parasitic on true belief. Fodor has, of course, responded to complaints like this – but perhaps it is worth looking for a different approach.

Mental representation and success in action

In the most general terms, the causal theories of mental representation I have sketched so far attempt to identify the content of a belief – what it represents – with its cause. And, seen like this, it is obvious why this theory should encounter the problem of error: if every belief has a cause, and the content of every belief is whatever causes it, then every belief will correctly represent its cause, rather than (in some cases) incorrectly representing something else.

However, there is another way to approach the issue. Rather than concentrating on the causes of beliefs, as indication theories do, we could concentrate on the effects they have on behaviour. As we saw in Chapter 2, what you do is caused by what you believe (i.e. how you take the world to be) and by what you want. Perhaps the causal basis of representation is not to be found simply among the causes of mental states, but among their effects. The reduction of representation should look not just at the inputs to mental states, but at their outputs.

Here’s one idea along these lines, the elements of which we have already encountered in Chapter 2. When we act, we are trying to achieve some goal or satisfy some desire. And what we desire depends in part on how we think things are – if you think you have not yet had any wine, you may desire wine, but if you think you have had some wine, you may desire more wine. That is, desiring wine and desiring more wine are obviously different kinds of desire: you can’t desire more wine unless you think you’ve already had some wine. Now, whether you succeed in your attempts to get what you desire will depend on whether the way you take things to be – your belief – is the same as the way things are. If I want some wine, and I believe there is some wine in the fridge, then whether I succeed in getting wine by going to the fridge will depend on whether this belief is correct: that is, it will depend on whether there is wine in the fridge.

(The success of the action – going to the fridge – will depend on other things too, such as whether the fridge exists, and whether I can move my limbs. But we can ignore these factors at the moment, as we can assume that my belief that there is wine in the fridge involves the belief that the fridge exists, and that I would not normally try and move my limbs unless I believed that I could. So failure on these grounds would imply failure in these other beliefs.)

So far, the general idea should be fairly obvious: whether our actions succeed in satisfying our desires depends on whether our beliefs represent the world correctly. It is hard to object to this idea, except perhaps on account of its vagueness. But it is possible to convert the idea into part of the definition of the representational content of belief. The idea is this. A belief says that the world is a certain way: that there is wine in the fridge, for example. This belief may or may not be correct. Ignoring the complications mentioned in the previous paragraph for the moment, we can say that, if the belief is correct, then actions caused by it plus some desire (e.g. the desire for wine) will succeed in satisfying that desire. So the conditions under which the action succeeds are just those conditions specified by the content of the belief: the way the belief says the world is. For example, the conditions under which my attempt to get wine succeeds are just those conditions specified by the content of my belief: there is wine in the fridge. In a slogan: the content of a belief is identical with the ‘success conditions’ of the actions it causes. Let’s call this the ‘success theory’ of belief content.18

The success theory thus offers us a way of reducing the representational content of beliefs. Remember the form of a reductive explanation of representation:

(R) X represents Y if and only if ___

The idea was to fill out the ‘___’ without mentioning the idea of representation. The success theory will do this in something like the following way:

A belief B represents condition C if and only if actions caused by B are successful when C obtains.

Here the ‘___’ is filled out in a way that, on the face of it, does not mention representation: it only mentions actions caused by beliefs, the success of those actions and conditions obtaining in the world.19

One obvious first objection is that many beliefs cause no actions whatsoever. I believe that the current Prime Minister of the UK does not have a moustache. But this belief has never caused me to do anything before now – what actions could it possibly cause?

This question is easy to answer, if we allow ourselves enough imagination. Imagine, for example, being on a quiz show where you were asked to list current world leaders without moustaches. Your action (giving the name of the current Prime Minister) would succeed if the condition represented by your belief – that the present Prime Minister does not have a moustache – obtains. The situation may be fanciful, but that does not matter. What matters is that it is always possible to think of some situation where a belief would issue in action. However, this means that we have to revise our definition of the success theory, to include possible situations. A simple change from the indicative to the subjunctive can achieve this:

A belief B represents condition C if and only if actions which would be caused by B would succeed were C to obtain.

This formulation should give the general idea of what the success theory says.

There is a general difficulty concerning the definition of the key idea of success. What does success in action actually amount to? As I introduced the theory earlier, it is the fact that the action satisfies the desire which partly causes it. My desire is for wine; I believe there is wine in the fridge; this belief and desire conspire to cause me to go to the fridge. My action is successful if I get wine, i.e. if my desire is satisfied. So we should fill out the theory’s definition as follows:

A belief B represents condition C if and only if actions which would be caused by B and a desire D would satisfy D were C to obtain.

Though a bit more complicated, this is still a reductive definition: the idea of representation does not appear in the part of the definition which occurs after the ‘if and only if’.

But we might still wonder what the satisfaction of desires is.20 It cannot simply be the ceasing of a desire, because there are too many ways in which a desire may cease which are not ways of satisfying the desire. My desire for wine may cease if I suddenly come to desire something else more, or if the roof falls in, or if I die. But these are not ways of satisfying the desire. Nor can the satisfaction of my desire be a matter of my believing that the desire is satisfied. If you hypnotise me into thinking that I have drunk some wine, you have not really satisfied my desire. For I have not got what I wanted, namely wine.

No: the satisfaction of my desire for wine is a matter of bringing about a state of affairs in the world. Which state of affairs? The answer is obvious: the state of affairs represented by the desire. So, to fill out our definition of the success theory, we must say:

A belief B represents condition C if and only if actions which would be caused by B and a desire D would bring about the state of affairs represented by D were C to obtain.

Now the problem is obvious: the definition of representation for beliefs contains the idea of the state of affairs represented by a desire. The representational nature of beliefs is explained in terms of the representational nature of desires. We are back where we started.21
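The regress can be displayed schematically (my labels again): the success theory defines the content of a belief via the satisfaction of desires, but satisfaction has in turn to be explained via the content of desires:

\[
\mathrm{Content}(B) = C \;\leftrightarrow\; \big[\text{actions caused by } B \text{ and } D \text{ would bring about } \mathrm{Content}(D) \text{ were } C \text{ to obtain}\big]
\]

The term Content(D) on the right-hand side is the notion of representation over again, this time applied to desires; so the definition is not yet reductive.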

So, if the success theory is going to pursue its goal of a reductive theory of mental representation, it has to explain the representational nature of desires without employing the idea of representation. There are a number of ways in which this might be done. Here I shall focus on the idea that mental states have teleological functions – specifically, biological functions. I’ll call this the biological theory of mental representation; versions of the theory have been defended by Ruth Millikan and David Papineau.22

Mental representation and biological function

The biological theory assumes that desires have some evolutionary purpose or function – that is, that they play some role in enhancing the survival of the organism, and hence the species. In some cases, there does seem to be an obvious connection between certain desires and the enhanced survival of the organisms of the species. Take the desire for water. If organisms like us do not get water, then they don’t survive very long. So, from the point of view of natural selection, it is clearly a good thing to have states which motivate or cause us to get water: and this is surely part of what a desire for water is.

However, it is one thing to say that desires must have had some evolutionary origin, or even an evolutionary purpose, and another to say that their contents – what they represent – can be explained in terms of these purposes. The biological theory takes this more radical line. It claims that natural selection has ensured that we are in states whose function it is to cause a situation which enhances our survival. These states are desires, and the situations are their contents. So, for example, getting water enhances our survival, so natural selection has made sure that we are in states that cause us (other things being equal) to get water. The content of these states is (something like) ‘I have water’ because our survival has been enhanced when these states cause a state of affairs where I have water.

The success of an action, then, is a matter of its bringing about a survival-enhancing state of affairs. In reducing the representational contents of beliefs and desires, the theory works from the ‘outside in’: first establish which states of affairs enhance the organism’s survival, then find states whose function it is to cause these states of affairs. These are desires, and they represent those states of affairs. This is how the representational powers of desires are explained.
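In the same schematic style as before (my labels, not the book’s), the proposal about desire content is:

\[
\mathrm{Content}(D) = S \;\leftrightarrow\; \text{natural selection has given } D \text{ the function of bringing about the survival-enhancing state of affairs } S
\]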

Once we have an explanation of the representational powers of desires, we can plug it into our explanation of the representational powers of beliefs. (This is not how all versions of the biological theory work; but it is a natural suggestion.) Remember that the success theory explained these in terms of the satisfaction of desires by actions. But we discovered that the satisfaction of desires involved a tacit appeal to what desires represent. This can now be explained in terms of the biological function of desires in enhancing the survival of the organism. If this ingenious theory works, then it clearly gives us a reductive explanation of mental representation.

But does it work? The theory explains the representational content of a given belief in terms of those conditions in which actions caused by the belief and a desire succeed in satisfying the desire. The satisfaction of desire is explained in terms of the desire bringing about conditions which enhance the survival of the organism. Let’s ignore for a moment the obvious point that people can have many desires – e.g. the desire to be famous for jumping off the Golden Gate bridge – which clearly have little to do with enhancing our survival.

Remember that the theory is trying to deal with our most basic thoughts and motivations – beliefs and desires about food, sex, warmth, etc. – and not yet with more sophisticated mental states. Later in this chapter we will scrutinise this a little more (‘Against reduction and definition’, p. 200).

What I want to focus on here is an obvious consequence of the biological theory: if a creature has desires then it has evolved. That is, the theory makes it a condition of something’s having desires that it is the product of evolution by natural selection. For the theory says that a desire is just a state to which natural selection has awarded a certain biological function: to cause behaviour that enhances the survival of the organism. If an organism is in one of these states, then natural selection has ensured that it is in it. If the state hadn’t been selected for, then the organism wouldn’t be in that state.

The problem with this is that it doesn’t seem impossible that there should be a creature which had thoughts but which had not evolved. Suppose, for the sake of argument, that thinkers are made up of matter – that if you took all of a thinker’s matter away, there would be nothing left. Surely it is just the converse of this that it is possible in principle to rebuild the thinker – to put all its matter back together – and it would still be a thinker. And if you can rebuild a thinker, then why can’t you build another thinker along the same lines? It appears at first sight that the biological theory of mental representation would rule out this possibility. But, though highly unlikely, it doesn’t seem to be absolutely impossible – indeed, the coherence of ‘teletransportation’ of the sort described in Star Trek seems to depend on it.

But the biological theory needn’t admit that this is impossible. What is central to the theory is that the creature’s states should have a function. But functions can be acquired in various ways. In the case of an artificially created thinker, the theory can say that its states obtain their function because they are assigned functions by their creator. So, just as an artificial heart can acquire a function by being designed and used as a heart, so an artificial person’s inner states might acquire functions by being designed and used as desires. These states only have derived intentionality, rather than original intentionality (see Chapter 1, ‘Intentionality’). But derived intentionality is still intentionality of a sort.

However, why couldn’t there be a thinker who is not designed at all? Couldn’t there be a thinker who came into existence by accident? Donald Davidson has described an imaginary situation in which lightning strikes a swamp and by an amazing coincidence synthesizes the chemicals in the swamp to create a replica of a human being.23 This person – called ‘swampman’ – has all the physical and chemical states of a normal human being; let’s suppose he is a physical replica of me. But swampman (or swamp-me) has no evolutionary history, he is a mere freak accident. He looks like me, walks like me, makes sounds like me: but he has not evolved.

Would swampman have any mental states? Physicalists who believe that mental states are completely determined by the local physical states of the body must say ‘Yes’. In fact, they must say that, at the moment of his accidental creation, swampman will have almost all the same mental states as me – thoughts and conscious states – except for those, of course, which depend on our different contexts and spatio-temporal locations. But the biological theory of mental representation denies that swampman has any representational mental states at all, as, to have representational mental states, a creature must have been the product of evolution by natural selection. So if swampman is a thinker, then the biological theory of mental representation is false. So the biological theory must deny the possibility of swampman. But how can it deny this mere possibility? Here is how David Papineau responds:

[T]he theory is intended as a theoretical reduction of the everyday notion of representational content, not as a piece of conceptual analysis. And as such it can be expected to overturn some of the intuitive judgements we are inclined to make on the basis of the everyday notion. Consider, for example, the theoretical reduction of the everyday notion of a liquid to the notion of the state of matter in which the molecules cohere but form no long-range order. This is clearly not a conceptual analysis of the everyday concept, since the everyday concept presupposes nothing about molecular structure. In consequence, this reduction corrects some of the judgements which flow from the everyday concept, such as the judgement that glass is not a liquid.24

We distinguished between conceptual and naturalistic definitions earlier in this chapter and, as this quotation makes clear, the biological theory is offering the latter. The defence against the swampman example is that our intuitive judgements about what is and is not possible are misleading us. If Papineau’s theory is right, then what we thought was allowed by the ordinary concept actually isn’t. Similarly, the ordinary concept of a liquid seems to rule out glass from being a liquid – but nonetheless, it is.

This response may make it look as if denying that swampman is a thinker is just one unfortunate counterintuitive side effect of the biological theory, which we must accept because of the other explanatory advantages of the theory. But, in fact, the situation is much more extreme than that. For the denial that swampman has any thoughts comes from the denial that his belief-forming mechanisms have any biological function – where a mechanism’s having a function is understood in terms of its actual causal history in bringing about certain effects which have actually enhanced the survival of its host creature. (This is the so-called ‘aetiological’ reading of the notion of biological function.25) So: no actual evolutionary history, no function.

But, of course, this way of understanding biological function is not restricted to the mental. This notion of function also applies to all other biological organs which are credited with having a function. So, if swampman has no thoughts, he also has no brain – because a brain is defined in terms of its many functions, and, by the aetiological conception, swampman’s brain has no function. By the same reasoning, swampman has no heart. And because blood is doubtless defined by its function, he has no blood either. He just has something which looks like a heart, which pumps something that looks like blood around something that looks like a human body, sustaining the activity of something that looks like a brain, and giving rise to something that ‘looks like’ thought. In fact, why am I calling swampman ‘he’ at all? On this view, he is not a man, but just something that looks like a man.

So, if the biological theory of mental representation is committed to swampman’s not having thoughts, it looks as if it is committed to swampman’s not being an organism, for the same reason. What is doing the work here is the conception of biological function which the theory is using. If we find the consequence of the theory implausible, then we could reject that conception of function, or we could reject the theory outright.26 Given what has just been said, and the difficulties which I will outline in a while (‘Against reduction and definition’), I would prefer to reject the theory. But the idea that representation has a basis in the biological facts about organisms has a lot of plausibility for a believer in the mechanical mind. Of course, a believer in the mechanical mind holds that human beings are fundamentally biological entities. The question is, however, in what way can biological explanations help us understand the nature of mental capacities, and mental representation in particular? Is there a general answer to this question? Some philosophers, influenced by evolutionary psychology, think there is. It will be useful, therefore, to make a brief digression into evolutionary psychology, before returning to our main theme of mental representation.

Evolution and the mind

One way to understand the biological theory of mental representation is to see it as part of the wider project of understanding mental capacities in terms of evolutionary biological explanation, known as evolutionary psychology.27 Evolutionary psychology is not just the claim (accepted by all scientifically informed people) that human beings, creatures with mental capacities, evolved from earlier species of apes in a long and complex process starting some seven million years ago. This is a truth as solid as anything in science, and (give or take some details and dates) is not up for dispute. Evolutionary psychology is the more specific and controversial claim that many mental capacities and faculties can be explained by considering them to be adaptations in the evolutionary biologist’s sense. An adaptation is a trait or capacity whose nature can be explained as the product of natural selection. The drab plumage of certain birds, for example, can be explained by the fact that those of their remote ancestors with drab plumage were better able to camouflage themselves among plants, and therefore survive predators, and therefore breed, and therefore pass on their plumage to their offspring . . . and so on. The birds’ plumage, it is concluded, is an adaptation.28

There is a debate among evolutionary biologists about what the units or the ‘currency’ of natural selection are. What does natural selection select among? Some say it selects among organisms to find the fittest for survival. Others, such as Richard Dawkins, think that this does not get to the heart of the matter, and argue that the basic unit of selection is the gene: organisms are ‘vehicles’ for carrying their genes, and conveying that genetic material by replicating into future generations (this is what Dawkins called the ‘selfish gene’ hypothesis).29 Note that believing that some, or many, human traits are adaptations is not the same as believing that the basic unit of selection is the gene. Nor is believing in adaptations the same as being an adaptationist. Adaptationism is defined in various ways: some say it is the view that all traits are adaptations (a crazy view, as we shall see); others define it as the view that adaptation is optimal: as one commentator puts it, the view is that ‘a model censored of all evolutionary mechanisms except natural selection could predict evolution accurately’.30

Two features of the concept of adaptation are worth noting. First, the inference that something is an adaptation is an inference to the best explanation (see ‘The modularity of mind’ in Chapter 4). The adaptive explanation of the bird’s plumage is better than the alternatives, whatever they may be, which gives us a reason to endorse the claim that the plumage is an adaptation. Second, and relatedly, the explanation is a form of ‘reverse engineering’: from the observable trait of the bird, the biologist infers the kind of environmental origins in which such a trait would be adaptive, i.e. it would aid the survival of creatures with that trait. Therefore, the evidence for the proposed adaptive explanation would involve at least two things: first, that the adaptive explanation is better than the alternatives, whatever they may be; and, second, that we have some kind of independent knowledge of the kind of environments in which the presence of such a trait does aid survival.

How might psychological capacities and traits be explained as products of natural selection? We have to be clear, first of all, what it is we are trying to explain. If we focus on behaviour patterns of individuals, then we will not find remotely plausible examples of adaptations. We will only find the sort of pseudo-science that fills Sunday newspapers. It is absurd to explain the behaviour of a rich older man buying an expensive meal in a restaurant for a younger woman by saying that the man wanted to propagate his genes and was attracted to the woman because youth is a good indicator of fertility; and equally absurd to explain the woman’s behaviour in accepting the meal by saying that she wanted to propagate her genes and was attracted to the man because his evident wealth was a good indication that he could provide for her offspring. This kind of thing is absurd partly because the disposition to buy meals in restaurants just could not be an adaptation, and not just because restaurants were invented in eighteenth-century Paris and not in the Pleistocene era.31 Buying meals in restaurants is a complex social activity that has implications for many other social institutions and practices (money, social and class structures, gastronomy, viticulture, etc.). To compare cases like these to things such as the colourful tail of the male peacock is simply to refuse to recognise the real and vast differences between these phenomena. And, without recognising these differences, we will never move beyond the most superficial understanding of what is going on in restaurants (and, hence, human psychology).

Moreover, as I noted above, arguments for adaptations must rely fundamentally on inference to the best explanation (of which ‘reverse engineering’ arguments are a special case). Maybe the explanation of the man’s behaviour in adaptationist terms would have something to be said for it if there were no other explanations around. But, where the explanation of human behaviour is concerned, we are not in this situation. We do not find situations like the one I have just described mysterious or baffling from the perspective of common-sense psychology. We can imagine any number of common-sense psychological explanations which make so much more sense of this situation than any hypothesis about the couple’s desires to propagate their genes. Unless we add some further assumptions (for example, eliminative materialism), the explanation of this behaviour in terms of genes is probably one of the worst explanations around. In any case, it has little chance of being the best.

Someone might conceivably respond that it is true that people in this kind of situation do not have conscious beliefs and desires about propagating their genes. But, nonetheless, it could be said that there are deep unconscious mechanisms that lead them to do things like this, and these mechanisms are adaptations. But what reason is there to believe this explanation even in this modified form? The reason cannot be because all traits are adaptations; there is little reason to believe this. In some cases, traits which plausibly evolved for one purpose have become used for others (these are called ‘exaptations’). A classic example is birds’ feathers, which are thought to have evolved originally for insulation, and only later to have become used for flight. Moreover, there are cases for which we lack any reason to suppose that a trait actually did come about as a result of natural selection at all. To take a controversial example: some thinkers, including Chomsky, argue that this is the case with language. They say that there is no reason to believe that human language is a product of natural selection. As we do not know the circumstances in which having a language actually aided the survival of our ancestors, we are not entitled to assume that it was an adaptation. Of course, we can think of cases in which language might have aided survival. But there is no valid argument to take us from ‘X might have aided survival in circumstances Y’ to ‘X is an adaptation’. That something could have come about because it gave an organism a certain survival advantage goes no way towards showing that it actually did.32

Nor should we assume (and few do) that everything we do is determined by our genes. Organisms with identical genetic material can develop in very different ways in different environments. The development and behaviour of organisms are determined by many factors, including their internal genetic dispositions and their general environmental conditions, as well as by freak occurrences and environmental disasters such as floods and ice ages. Evolution, the development of forms of life over time, does not rely on natural selection alone.

In a famous discussion, Stephen J. Gould and Richard Lewontin drew an analogy between adaptationist explanations of traits and spurious explanations of why certain artefacts have the form they have.33 Looking at the fabulous mosaics in the arches of the doorway of St Mark’s basilica in Venice, one might be led to think that the spaces between the arches (called ‘spandrels’) were designed in order that the mosaics might be put there. But this is not so: the spandrels are a mere side effect of the building of the arches, and the inspired artist or artists took advantage of the space to create something beautiful. The spandrels were not built in order to make the mosaics. To argue that they were is to make a mistake analogous to that of seeing adaptations everywhere. An organism’s traits may arise through many historical processes, and we need sound empirical evidence before claiming natural selection as one of these. In the absence of such evidence, we should not make up adaptationist stories of the circumstances in which certain traits would aid survival.

So it seems that we have no reason to think that every trait of an organism is an adaptation. Perhaps this should not be very controversial, and the extreme adaptationism mentioned above is really a straw man. Paul Bloom sums up the present attitude of evolutionary biologists as follows:

Modern biologists have elaborated Darwin’s insight that although natural selection is the most important of all evolutionary mechanisms, it is not the only one. Many traits that animals possess are not adaptations, but emerge either as by-products of adaptations or through entirely nonselectionist processes, such as random genetic drift. Natural selection is necessary only in order to explain the evolution of what Darwin called ‘organs of extreme perfection and complexity’ such as the heart, the hand and the eye . . . Although there is controversy about the proper scope of selectionist theories, this much at least is agreed upon, even by those who are most cautious about applying adaptive explanations.34

Assuming that this is a broadly correct account of the present state of knowledge, the upshot is that we need positive reasons to believe that any psychological traits are adaptations. Our example of the rich man and the younger woman may well have been a caricature of a certain kind of adaptationist explanation. But what kinds of example would be more plausible?

Taking our lead from Darwin’s remark quoted above, perhaps we should look for ‘organs of extreme perfection and complexity’ in the mind. Or at least we should look for mental organs of some sort, independently identified as such. Then we would be in a position to ask the ‘reverse engineering’ question: in what environment would the possession of such an organ have aided the survival of the creatures whose organ it is? The psychologists would then need to look for evidence that the organism in question lived in an environment of that kind, and evidence that organisms developed along the lines suggested.

The best candidates for such mental organs would be relatively isolated, resilient, probably innate mechanisms within the mind, dedicated to specific information-processing tasks. In other words, they would be mental modules in something like the sense described in Chapter 4 (‘The modularity of mind’). The visual system is a prime example of such a module. To establish that the visual system is an adaptation – a claim that would perhaps be found plausible by even the most sceptical of anti-adaptationists – one would have to give a specification of its task, and of the environment in which the performance of this task would aid survival. When in possession of a fairly well-understood mental module, we can raise questions about its function and its evolutionary history in the hope of finding out whether it is an adaptation, just as we can about other organs. (One difficulty, of course, is finding the actual evidence for the past existence of cognitive capacities: as Fodor says, ‘cognition is too soft to leave a paleontological record’.35) It is not surprising, then, that evolutionary psychologists have tended to adopt the massive modularity thesis described in Chapter 4 – the thesis that all aspects of cognition can be broken down into modules. And it is equally unsurprising that critics of evolutionary psychology, such as Fodor, are also those who reject massive modularity. There will be no adaptationist explanation of the cognition underlying, for example, human ‘mating’ behaviour, simply because it is impossible to isolate these cognitive activities away from all the other interlinked activities within which they make sense.

The only conclusion we can draw from this short discussion is that the issues surrounding evolutionary psychology are entangled with controversial issues in evolutionary theory itself – such as the scope of adaptationist explanation, and what that kind of explanation amounts to – but that evolutionary psychology is at its strongest when its explananda are mental modules. Whether we should believe that any modules are adaptations depends, unsurprisingly, on the evidence, not on philosophical theorising – nor on the availability of possible explanations. In any case, it seems plain that the mechanical picture of the mind does not need an evolutionary account of mind. The mind can be integrated into the world of causes and effects even if most mental capacities lack an evolutionary explanation.36

Against reduction and definition

Let’s now return to the project of explaining mental representation by giving a reductive definition of it. Even if this reductive approach manages to solve the disjunction problem, one of the problems that we postponed earlier still remains: how do we explain the representational powers of concepts other than very simple concepts such as water, food, predator and so on? Reductive theories of representation tend to treat this as largely a matter of detail; their approach is: let’s get the simple concepts right before moving on to the complex concepts. But, even if they do get the simple concepts right, how exactly are we supposed to move on to the complex concepts? How are we supposed to explain a concept like (for example) baroque architecture in causal or biological terms?

This question arises for Fodor too. Perhaps Fodor would say that mental representations of baroque architecture are asymmetrically dependent on pieces of baroque architecture – for example, a piece of baroque architecture causes the mental representation baroque architecture, and, even though a piece of Renaissance architecture may cause this mental representation, it wouldn’t do so if the baroque architecture didn’t. But this is very implausible. For one thing, many people have come in contact with baroque architecture without forming any representations of it as baroque; and some people will have come across the concept in books without ever having had causal contact with baroque architecture. So what should Fodor say?

Reductive theories of representation aim to provide some way of filling in the schema,

(R) X represents Y if and only if …

in terms that do not mention representation. As Fodor has said, ‘if aboutness is real, it must really be something else’.37 The problem I am raising now is that, if a reductive theory is going to be a theory of all kinds of mental content, then either it has to tell us how we can plausibly fill in the blank directly for all concepts and contents or it has to give us a systematic method of building up from the concepts it can directly deal with (the ‘simple’ concepts) to those it cannot directly deal with (the ‘complex’ concepts). I have suggested that neither Fodor’s theory nor the biological theory can take the direct route. So these theories must provide us with some idea of how to get from ‘simple’ concepts to ‘complex’ ones. And until we have such an idea we are entitled to suspend belief about whether there can be any such thing as a reductive theory of representation at all.
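To see what filling in the schema would involve, consider two rough illustrations (these are my paraphrases for exposition, not exact statements of the theories discussed earlier). A simple causal theory would fill it in along the lines of

(R1) X represents Y if and only if Ys cause Xs

while a biological theory would offer something like

(R2) X represents Y if and only if it is the biological function of Xs to indicate Ys.

The difficulty just raised is that nothing like the right-hand side of (R1) or (R2) plausibly holds for a concept such as baroque architecture.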

(The success theory, on the other hand, doesn’t have any difficulties dealing with all contents directly. For it can simply say that a belief has the content P just in case actions caused by that belief and a desire D would succeed in satisfying D just when P is true – and P can be a situation concerning anything whatsoever. But, as we saw, the success theory cannot provide a genuine reduction of representation unless it can give a reduction of the contents of desires. So as it stands, the success theory is incomplete.)

This line of thought can lead to real worries about the whole idea of explaining mental representation by reducing it by means of a definition such as (R). For, after all, defining something (whether naturalistically or not) is not the only way of explaining it. If I wanted to explain baroque architecture to you, for example, I might take you to see some baroque buildings, pointing out the distinctive features – the broken pediments, the cartouches, the extravagant use of line and colour – and contrasting the style with earlier and later styles of architecture until you gradually come to have a grasp of the concept. What I would not do is say ‘A building is baroque if and only if …’, with the blank filled in by terms which do not mention the concept baroque. In this case, grasping the concept is not grasping a definition – to use Wittgenstein’s phrase, ‘light dawns gradually over the whole’.38

This is not to say that a reductive definition cannot be an explanation – just that it is not the only kind of explanation. So far in this chapter I have focused on philosophical attempts to explain representation by reducing it by definition. In what remains I want to return to the non-reductive possibility which I mentioned at the opening of this chapter.

As I introduced the idea in Chapter 3, the notion of computation depends on the notion of representation. So, according to reductionists like Fodor, for example, the direction of investigation is as follows. What distinguishes systems that are merely describable as computing functions (such as the solar system) from systems that genuinely do compute functions (such as an adding machine) is that the latter contain and process representations: no computation without representation. The aim, then, is to explain representation: we need a reductive theory of representation to vindicate our computational theory of cognition in accordance with the naturalistic assumptions mentioned above (‘Reduction and definition’).

But this final move could be rejected. It could be rejected on the grounds that the naturalistic assumptions themselves should be rejected. Or it could be rejected on the grounds that the computational theory of cognition does not require a reductive account of representation in order to employ the notion of representation. I shall concentrate on this second line of thought.

I want to consider, in a very abstract way, a theory of mental representation which adopts the following strategy.39 What the theory is concerned to explain is the behaviour of organisms in their environments. This behaviour is plausibly seen as representational – as directed at goals, as attempting to satisfy the organism’s desires and aims (e.g. searching for food). The theory claims that the best explanation of how this behaviour is produced is to view it as the product of computational processes – to view it, that is, as computing a ‘cognitive function’: a function whose arguments and values are representations which have some cognitive relation to one another (in the way described in Chapter 4: ‘The argument for the language of thought’). As computations are (of their very nature) defined in terms of representations, certain inner states of the organism, as well as the inputs and outputs, must be treated as representations. These states are the states involved in the computation, so they must have a specification which is not given in terms of what they represent – a specification in purely formal or ‘syntactic’ terms. And to treat a state as a representation is to specify a mapping from the state itself, described in purely formal terms, to its abstract representational content. This mapping is known as an ‘interpretation function’. The picture which results is what Cummins calls the ‘Tower Bridge’ picture (see Figure 5.1).40

On this view, it is not as if we have to find the states of the organism which we can tell are representations on independent grounds – that is, on grounds independent of the computations that we attribute to the organism. What we do is treat a certain system as performing computations, in which computation is the disciplined transition between inner states, formally specified. We then define an interpretation function which ‘maps’ the inner states onto contents. This approach agrees with Fodor’s claim that there is no computation without representation. But it does not mean that we need to give a reductive account of what a representation is. Representation is just another concept in the theory; it does not need external philosophical defence and reduction. This is why I call this approach ‘non-reductive’.

[Figure 5.1 appears here: two horizontal spans connected by vertical arrows. The upper span, the function relating the entities represented, runs from I(S) to I(S*); the lower span, the computation relating the representing states, runs from S to S*; the vertical arrows are the interpretation function I.]

Figure 5.1 Cummins’s ‘Tower Bridge’ picture of computation. The upper span pictures the function whose arguments and values are the entities represented. The lower ‘span’ pictures the function whose arguments and values are states of the mechanism, S and S*. I, the interpretation function, maps the states of the mechanism onto the entities represented. ‘I(S)’ can be read: ‘the entity represented by state S under interpretation I’. For example, treat the entities represented as numbers and the mechanism as an adding machine. The function on the top span is addition. The function I maps states of the machine (button pressings, displays, etc.) onto numbers. A computation of the addition function is a causal transition among the states of the machine that mirrors the ‘transition’ among numbers in addition.
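To make the adding-machine example in the caption concrete, here is a minimal sketch in Python (my own illustration – the book itself contains no code, and every name in the sketch is invented for the purpose). The state transition is defined purely formally, over uninterpreted marks; it is the interpretation function I that lets us read the transition as computing addition.

```python
# A minimal sketch of Cummins's 'Tower Bridge' picture (illustrative only).
# States are pairs of uninterpreted marks; the transition is purely formal.

def transition(state):
    """The lower span: a formal operation on states (concatenate the marks)."""
    left, right = state
    return left + right

def I(marks):
    """The interpretation function: map a formal state onto a number."""
    return len(marks)           # e.g. '|||' is interpreted as the number 3

state = ('||', '|||')           # a formal state of the 'machine'
result = transition(state)      # the causal/formal transition: '|||||'

# The upper span is addition. The machine computes addition just in case
# the formal transition, read through I, mirrors the arithmetical one:
assert I(result) == I(state[0]) + I(state[1])   # 5 == 2 + 3
```

Nothing in the transition itself mentions numbers; addition enters the picture only once the interpretation function is specified, which is just the point of the ‘Tower Bridge’ diagram.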

An analogy may help to show how representation figures in the computational theory on this account.41 When we measure weight, for example, we use numbers to pick out the weights of objects, in accord with a certain unit of measurement. We use the number 2.2 to pick out the weight (in pounds) of a standard bag of sugar. Having picked out a weight by ‘mapping’ it on to a number, we can see that arithmetical operations on numbers ‘mirror’ physical relations between specific weights. So, for example, if we know that a bag of sugar weighs 2.2 pounds, we only need to know elementary arithmetic to know that two such bags of sugar will weigh 4.4 pounds, and so on.

Analogously, when we ‘measure’ a person’s thoughts, we use sentences to pick out these thoughts – their beliefs, desires and so on. We use the sentence ‘The man who broke the bank at Monte Carlo died in misery’ to pick out someone’s belief that the man who broke the bank at Monte Carlo died in misery. Having picked out the belief by ‘mapping’ it on to a sentence, we can see that logical relations between sentences ‘mirror’ psychological relations between specific beliefs. So, for example, if we know that Vladimir believes that the man who broke the bank at Monte Carlo died in misery, we need only elementary logic to know that Vladimir believes that someone died in misery, and so on.

Or so the story goes – the analogy raises many complicated issues. (Remember, for instance, the question discussed in Chapter 4 of whether logic can really provide a description of human thought processes.) But the point of employing the analogy here is just to illustrate how concrete states might be mapped onto apparently ‘abstract’ entities such as numbers or sentences, and how the behaviour of these abstract entities mirrors certain interesting relations between the states. The analogy also illustrates how the theory can permit itself to be non-reductive: just as the question does not arise of how we ‘reduce’ an object’s relation to a number which picks out its weight, neither should the question arise about how we reduce a person’s relation to the sentences which express the contents of their thoughts.

Two features of the weight case shown above are worth noting. First, there must be an independent way of characterising the weights of objects, apart from that in terms of numbers. Think of old-fashioned kitchen scales, on which something’s weight is measured by simply comparing it to other weights. Numbers need not be used.

Second, we have to accept that there is no unique number which measures the weight of an object. For which number is used to measure weight is relative to the unit of measurement chosen. The weight of our bag of sugar is 2.2 pounds, but it is also 1 kilogram. There is no limit in principle to the numbers which can be used to measure our bag of sugar – so we cannot talk about ‘the’ number which expresses its weight.
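The unit-relativity point can be put in the same sketch-like terms (again my own illustration, with invented names): two different number-assignments, playing the role of interpretation functions, assign different numbers to the same object, and each ‘mirrors’ the physical facts equally well.

```python
# Unit-relativity (illustrative only): one object, two equally good
# number assignments, each of which mirrors the physical facts.

SUGAR_BAG = 'standard-bag-of-sugar'   # a stand-in for a physical object

in_pounds = {SUGAR_BAG: 2.2}          # one assignment of numbers to weights
in_kilograms = {SUGAR_BAG: 1.0}       # another assignment, equally correct

# Combining two bags 'mirrors' addition under either assignment:
for assignment in (in_pounds, in_kilograms):
    w = assignment[SUGAR_BAG]
    print(w + w)                      # 4.4 under one mapping, 2.0 under the other

# Neither 2.2 nor 1.0 is 'the' number expressing the bag's weight; which
# number we use depends on the unit chosen, not on the object itself.
```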

Do these features carry over to the analogous case of mental representation? The first feature should carry over uncontroversially for those who accept a computational theory of cognition. For they will accept that the mental states that participate in computations do have a formal description which is not given in terms of the sentences which express their contents.

The second feature is a little more problematic. For, in the case of a belief, for example, we have a strong conviction that there is a unique sentence which expresses its content. The content of a belief is what makes it the belief it is – so surely a belief’s content is essential to it. If the belief that snow is white had a different content (say, grass is green) then surely it would be a different belief. But, if the analogy with numbers is to work, then there must be many different sentences which pick out the same belief state. Which sentence, then, expresses the content of the belief?

The obvious way around this is to say that the content of the belief is expressed by all those sentences with the same meaning. The belief that snow is white, for example, can be picked out by using the English sentence ‘Snow is white’, the Italian sentence ‘La neve è bianca’, the German sentence ‘Schnee ist weiss’, or the Hungarian ‘A hó fehér’, and so on.42 These sentences are intertranslatable; they all mean the same thing. It is this meaning, rather than the sentences which have the meaning, which is the content of the belief. So the idea that each belief has a unique content which is essential to it is preserved.

However, it could be said that, while this approach may work straightforwardly for states like belief, there is no need to apply it to the sorts of states postulated by a computational theory of mind (e.g. a computational theory of vision).43 For, on the view of computation defended by the non-reductive approach, we should abandon the idea that all mental states have unique contents which are essential to them.44 The reason, essentially, is that an interpretation function is just a mapping of the inner states onto an abstract structure that ‘preserves’ the structure of the inner states. And there are many mappings that will do this. That is, there are many interpretation functions that will assign distinct interpretations to the symbols; which one we choose is determined not by the elusive ‘unique content’ of the state, but by which interpretation gives the theory more explanatory power.

It could be objected that this kind of approach makes the nature of representation and computation too dependent on the decisions of human theorists. For I’ve just been talking about ‘treating’ the states of the system as representations, and of ‘specifying’ mappings from states to contents, ‘assigning’ interpretations to states, and so forth. It could be objected that whether an organism performs computations or not is a matter of objective fact, not of our specifications or assignments.

But this criticism is misplaced. For, while the application of a theory to an organism is clearly a matter of human decision, whether this application correctly characterises the organism is not. The question is: are any of the organism’s cognitive processes correctly characterisable as computations? To test a hypothesis about the computational character of an organism’s processes, we have to interpret the elements in that process. But this no more makes the existence of the process a matter of human decision than the fact that we can pick out and label the physical forces acting individually on a body, and so calculate the net force, makes this physical interaction a matter of human decision.

To sum up: the non-reductive answer to the question, ‘What is a mental representation?’ would be given by listing the ways in which the concept of representation figures in the theory. Those states of an organism which are interpretable as instantiating the stages in the computation of a cognitive function are representations. This account, plus the general theory of computation, tells us all we need to know about the nature of mental representations. The hard tasks are now ahead of us: finding out which systems to treat as computational, and finding out which computations they perform.

The appeal of this non-reductive theory of representation is that it can say many of the things that the reductive theory wants to say about the computational structure of states of mind, without having to provide a definitional reduction of the notion of representation, and so without having to deal with the intractable problems of error. The price that is paid for this is allowing the idea that computational mental states do not have unique contents which are essential to them.

But why should this be a problem? Partly because it seems so obvious to us that our thoughts do have unique contents. It is obvious to me that my current belief that it is now raining, for example, just could not have another content without being a different belief. However, it can be responded that this appeal to how our minds seem to us is, strictly speaking, irrelevant to the computational theory of mind. For that theory deals with the unconscious mechanisms of thought and thought-processes; it is not directly answerable to introspection, to how our thoughts strike us. After all, our thoughts do not strike us as computational – except perhaps when we are consciously working our way through an explicit algorithm – but no-one would think that this is an adequate objection to the computational theory of cognition.

There is a tension, then, between how our thoughts seem to us, and certain things that the computational theory of cognition says about them. The significance of this tension will be discussed further in Chapter 6.

Conclusion: can representation be reductively explained?

Philosophical attempts to explain the notion of representation by reducing it have not been conspicuously successful. They all have trouble with the problems of error. This is unsurprising: the idea of error and the idea of representation go hand in hand. To represent the world as being a certain way is implicitly to allow a gap between how the representation says the world is, and how the world actually is. But this is just to allow the possibility of error. So any reduction which captures the essence of representation must capture whatever it is that allows for this possibility. This is why the possibility of error can never be a side issue for a reductive theory of representation.

But there is a further problem. Reductive theories of representation have to be able to account for all kinds of mental content, not just the simple kinds connected with (say) food and reproduction. But they have as yet provided no account of how to do this. So a certain degree of scepticism seems advisable.

While both these problems are avoided by the non-reductive theory I described at the end of the chapter, this theory embraces the consequence that many of our mental states will not be assigned unique contents. But the idea that our mental states have unique contents seems to be essential to representational mental states as we ordinarily understand them. So, even understanding the computational theory of cognition in this non-reductive way, we start to depart from the ordinary notion of cognition and thought. The question of the extent to which this is acceptable will be addressed in the next chapter.

Further reading

A good place to go from here is Robert Cummins’s Meaning and Mental Representation (Cambridge, Mass.: MIT Press 1989), which contains an excellent critical survey of the main naturalistic theories of mental representation which were popular in the 1980s (and, interestingly, not much changed in the 1990s). The most useful anthology is Mental Representation, edited by Stephen Stich and Ted Warfield (Oxford: Blackwell 1994). An innovatory large-scale attempt to defend a causal theory of mental representation is Fred Dretske’s Knowledge and the Flow of Information (Cambridge, Mass.: MIT Press 1981). A shortened version of some of Dretske’s ideas is his paper ‘The intentionality of cognitive states’ in David Rosenthal (ed.) The Nature of Mind (Oxford: Oxford University Press 1991). Dretske responds to the problems of error in his essay ‘Misrepresentation’ in R. Bogdan (ed.) Belief: Form, Content and Function (Oxford: Oxford University Press 1985). Jerry Fodor’s theory occurs in Chapters 3 and 4 of A Theory of Content and Other Essays (Cambridge, Mass.: MIT Press 1990). A less complex version of Fodor’s theory is in Psychosemantics (Cambridge, Mass.: MIT Press 1987), Chapter 4. One approach to naturalising representation that is not discussed here, but would need to be in a broader treatment, is functional role semantics: see Ned Block, ‘Advertisement for a semantics for psychology’ in Midwest Studies in Philosophy 10 (1986). David Papineau defends his biological/teleological theory of mental representation in Philosophical Naturalism (Oxford: Blackwell 1993), and Ruth Millikan defends a somewhat different kind of biological theory in Language, Thought and Other Biological Categories (Cambridge, Mass.: MIT Press 1984). The key text in evolutionary psychology is J.H. Barkow, L. Cosmides and J. Tooby (eds.) The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press 1992); but, for a more accessible account and synthesis of various areas of cognitive science, see Steven Pinker, How the Mind Works (New York, NY: Norton 1997). The whole approach is attacked vigorously by Fodor in The Mind Doesn’t Work That Way (Cambridge, Mass.: MIT Press 2000). Among anti-naturalist theories of representation (not covered in any detail in this book), John McDowell’s work stands out. See his Mind and World (Cambridge, Mass.: Harvard University Press 1994) and his paper ‘Singular thought and the extent of inner space’ in Philip Pettit and John McDowell (eds.) Subject, Thought and Context (Oxford: Oxford University Press 1986). This anthology also contains ‘Scientism, mind and meaning’ by Gregory McCulloch, a more accessible introduction to this kind of anti-naturalist approach.

6

Consciousness and the mechanical mind

The story so far

What should we make of the mechanical view of the mind?1 In this book we have considered various ways in which the view has dealt with the phenomenon of mental representation, with our knowledge of the thoughts of others, and with how (supplemented by further assumptions) it forms the philosophical basis of a computational view of thought. And, in the previous chapter, we looked at the attempts to explain mental representation in other terms, or ‘reduce’ it.

There are many questions unresolved: how adequate is the Theory Theory account of our understanding of others’ thoughts? Do our minds have a connectionist or a classical ‘architecture’, or some combination of the two? Should a theory of mental representation attempt to reduce the contents of mental states to causal patterns of indication and the like, or is a non-reductive approach preferable? On some of these questions – e.g. connectionism versus classicism – not enough is yet known for the sensible response to be other than a cautious open mind. On others – e.g. Theory Theory versus simulation – it seems to me that the debate has not yet been sharply enough formulated to know exactly what is at stake. It should be clear, though, that the absence of definite answers here should not give us reason to reject the mechanical view of the mind. For the essence of the mechanical view as I have characterised it is very hard to reject. It essentially involves commitment to the overwhelmingly plausible view that the mind is a causal mechanism which has its effects in behaviour. Everything else – computation, Theory Theory, reductive theories of content – is detail.

However, there are philosophers who do reject the view wholesale, and not because of the inadequacies of the details. They believe that the real problem with the mechanical view of the mind is that it distorts or even offers no account of how our minds appear to us. It leaves out what is sometimes called the phenomenology of mind – where ‘phenomenology’ is the theory (‘ology’) of how things seem to us (the ‘phenomena’). These critics object that the mechanical mind leaves out all the facts about how our minds strike us, what it feels like to have a point of view on the world. As far as the mechanical approach to the mind is concerned, they say, this side of having a mind might as well not exist. The mechanical approach treats the mind as ‘a dead phenomenon, a blank agency imprinted with causally efficacious traces of recoverable encounters with bits of the environment’.2 Or, to borrow a striking phrase of Francis Bacon’s, the criticism is that the mechanical approach will ‘buckle and bow the mind unto the nature of things’.3

In fact, something like this is a common element in some of the criticisms of the mechanical mind which we have encountered throughout this book. In Chapter 2, for instance, we saw that the Theory Theory was attacked by simulation theorists for its inadequate representation of what we do when we interpret others. By ‘what we do when we interpret others’, simulation theorists are talking about how interpretation strikes us. Interpretation does not seem to us like applying a theory – it’s much more like an act of imaginative identification. (I do not mean to imply that simulation theorists are necessarily opposed to the whole mechanical picture; but they can be.) Yet why should anyone deny that interpretation sometimes seems to us like this? In particular, why should Theory Theorists deny it? And, if they shouldn’t deny it, then what is the debate supposed to be about? The Theory Theory can reply that the issue is not how interpretation seems to us, but what makes interpretation succeed. The best explanation for the success of interpretation is to postulate tacit or implicit knowledge of a theory of interpretation. Calling this theory ‘tacit’ is partly to indicate that it is not phenomenologically available – that is, we can’t necessarily tell by introspecting whether the theory is correct. But, according to the Theory Theory, this is irrelevant.

The same pattern of argument emerged when we looked at Dreyfus’s critique of AI in Chapter 3. Dreyfus argued that thinking cannot be a matter of manipulating representations according to rules. This is because thinking requires ‘know-how’, which cannot be reduced to representations or rules. But part of Dreyfus’s argument for this is phenomenological: thinking does not seem to us like rule-governed symbol manipulation. It wouldn’t be too much of a caricature to represent Dreyfus as saying: ‘Just try it: think about some everyday task, like going to a restaurant, say – some task which requires basic cognitive abilities. Then try and figure out which rules you are following, and which “symbols” you are manipulating. You can’t say what they are, except in the most open-ended and imprecise way’.

And, once again, the reply to this kind of objection on behalf of AI and the computational theory of cognition is that Dreyfus misses the point. For the point of the computational hypothesis is to explain the systematic nature of the causal transitions that constitute cognition. The computational processes that the theory postulates are not supposed to be accessible to introspection. So it cannot be an objection to the computational theory to say that we cannot introspect them.

In a number of debates, then, there seems to be a general kind of objection to mechanical hypotheses about the mind – that they leave out, ignore or cannot account for facts about how our minds seem to us, about the phenomenology of mind. In response, the mechanical view argues that how our minds seem to us is irrelevant to the mechanical hypothesis in question.4

It must be admitted that there is something unsatisfactory about this response. For the mechanical view cannot deny that there is such a phenomenon as how (our own and others’) minds seem to us. And, what is more, many aspects of the idea of the mechanical mind are motivated by considering how the mind seems to us, in a very general sense of ‘seems’. Consider, for example, the route I took in Chapter 2 from the interpretation of other minds to the hypothesis that thoughts are inner causal mechanisms, the springs of action. This is a fairly standard way of motivating the causal picture of thoughts, and its starting-points are common-sense observations about how we use conjectures about people’s minds to explain their behaviour. Another example is Fodor’s appeal to the systematic nature of thought in order to motivate the Mentalese hypothesis. The examples that Fodor typically uses concern ordinary beliefs, as conceived by common sense: if someone believes that Anthony loves Cleopatra, then they must ipso facto have the conceptual resources to (at least) entertain the thought that Cleopatra loves Anthony. The starting points in many arguments for aspects of the mechanical mind are common-sense observations about how minds strike us. So it would be disingenuous for defenders of the mechanical mind to say that they have no interest at all in how minds seem to us.

The worry here is that, although it may start off in common-sense facts about how minds strike us, the mechanical view of the mind ends up saying things which seem to ignore how minds strike us, and thus depart from its starting point in common sense. What is the basis of this scepticism about the mechanical mind? Is it just that no defender of the view has yet come up with an account of the phenomenology of the mind? Or is there some deeper, more principled, objection to the mechanical mind which derives from phenomenology, which shows why the mechanical picture must be incorrect? In Chapter 5, we saw that many suppose that the normativity of the mental is one reason why a general reduction of mental representation must fail. The idea is that the facts that thought is true or false, correct or incorrect, that reasoning is sound or unsound, are all supposed to prevent an explanation of mental content in purely causal terms. But I argued that a conceptual reduction of mental content may not be essential to the mechanical picture of the mind. Representation may have to be considered a basic or fundamental concept in the theory of mind, without any further analysis. If this is true, then normativity is a basic or fundamental concept in the theory of mind too, because the idea of representation essentially carries with it the idea of correctness and incorrectness. But we saw no reason in this to deny that the underlying mechanisms of mental representation are causal in nature, and therefore no reason to deny the mechanical picture wholesale.

But there is another area in the investigation of the mind in which general arguments have been put forward that no causal or mechanical picture of the mind can possibly give an adequate account of the phenomena of mind. This is the investigation into consciousness, postponed since Chapter 1. It is often said that consciousness is what presents the biggest obstacle to a scientific account of the mind. Our task in this chapter is to understand what this obstacle is supposed to be.

Consciousness, ‘what it’s like’ and qualia

Consciousness is at once the most obvious feature of mental life and one of the hardest to define or characterise. In a way, of course, we don’t need to define it. In everyday life, we have no difficulty employing the notion of consciousness – as when the doctor asks whether the patient has lost consciousness, or when we wonder whether a lobster is conscious in any way when it is thrown alive into a pan of boiling water. We may not have any infallible tests which will establish whether a creature is conscious or not; but it seems that we have no difficulty deciding what is at issue when trying to establish this.

Or at least, we have no difficulty deciding what is at issue as long as we don’t try and reflect on what is going on. In considering the question, ‘What is time?’, Saint Augustine famously remarked that when no-one asks him, he knows well enough, but if someone were to ask him, then he does not know how to answer. The situation seems the same with ‘What is consciousness?’. We are perfectly at home with the distinction between the conscious and the non-conscious when we apply it in ordinary life; but when we ask ourselves the question, ‘What is consciousness?’, we are stuck for an answer. How should we proceed?

Well, what is the everyday distinction between the conscious and the non-conscious? We attribute consciousness to creatures, living organisms, and also to states of mind. People and animals are conscious; but so also are their sensations and (some of) their thoughts. The first use of the concept of consciousness has been called ‘creature consciousness’ and the second use ‘state consciousness’.5 Creature consciousness and state consciousness are obviously interdependent: if a creature is conscious, that is when it is in conscious states of mind; and conscious states of mind are ipso facto the states of a conscious creature. There is no reason to suppose that we should define the one idea in terms of the other. But, nonetheless, it is perhaps easier to start our exploration of consciousness by considering what it is for a creature to be conscious. Thomas Nagel gave philosophers a vivid way of talking about the distinction between conscious and non-conscious creatures: a creature is conscious, he said, when there is something it is like to be that creature.6 There is nothing it is like to be a bacterium, nothing it is like to be a piece of cheese – but something it is like to be a dog or a human being or (to use Nagel’s famous example) a bat. This ‘what it is like’ idiom can be easily transferred to state consciousness too: there is something it is like to be tasting (to be in the state of tasting) vanilla ice-cream or to be smelling (to be in the state of smelling) burning rubber. That is, there is something it is like to be in these states of mind. But there is nothing it is like to be largely composed of water, or to have high blood pressure. These are not states of mind.

The phrase ‘what it is like’ is not supposed to be a definition of consciousness. But, as I have said already, we are not looking for a definition here. No-one lacking the concept of consciousness (if such a person were possible) would be able to grasp it by being told that there is something it is like to be conscious, or to be in conscious states. But we can say a couple of things about the meaning of this phrase which help to clarify its role in discussions of consciousness. First, the phrase is not intended in a comparative way. One might ask: what is Vegemite like? And the answer could be given: it’s like Marmite. (For the uninitiated, Vegemite and Marmite are wonderful yeast-based condiments, the first from Australia, the second from the UK.) Here, asking what something is like is asking what things are like it; that is, what things resemble it. This is not the sense of ‘what it’s like’ that Nagel intended when he said that there is something it is like to be a bat. Second, the phrase is not intended simply to mean what it feels like, if ‘feels’ has its normal meaning. For there are some states of mind where it makes sense to say that there is something it is like to be in these states, even though this does not involve feeling in any ordinary sense. Consider the process of thinking through some problem, trying to understand some difficult task, in your head. There is, intuitively, something it is like to be thinking through this problem; but it need not ‘feel’ like anything. There need be no special feelings or sensations involved. So, although there is something it is like to feel a sensation, not all cases where there is something it is like are cases of feelings.

‘What it is like’, then, does not mean what it resembles and it does not (just) mean what it feels like. What it is trying to express is how things seem to us when we are conscious, or in conscious states – what I called in the previous section the appearance or the phenomena of mind. This is supposed to be different from merely being the kind of creature which has a mind: what it is to be a bat is one thing; what it is like to be a bat is another. Now, the term ‘phenomenal consciousness’ is sometimes used for this idea of how things seem to a conscious creature; and the term is etymologically apt, given that the English word ‘phenomenon’ is derived from the Greek word for appearance. A creature is phenomenally conscious when there is something it is like to be that creature; a state of mind is phenomenally conscious when there is something it is like to be in that state. The special way a state of mind is, what constitutes what it is like to be in that state, is likewise called the phenomenal character of the state.

Sometimes phenomenal consciousness is described in terms of qualia (we first encountered qualia in Chapter 1, ‘Brentano’s thesis’). Qualia (plural: the singular is quale) are supposed to be the non-representational, non-intentional, yet phenomenally conscious properties of states of mind.7 Believers in qualia say that the particular character of the aroma of smelling coffee cannot just be captured in terms of the way the smell represents coffee; this would fail to capture the way it feels to smell coffee. Even when you have described all the ways your experience of the smell of coffee represents coffee, you will have left something out: the qualia of the experience of smelling coffee, the intrinsic properties of the experience, which are independent of the representation of coffee. Someone who believes in qualia denies Brentano’s thesis that all mental phenomena are intentional: certain conscious properties of states of mind are not intentional at all. And these are supposed to be the properties which are so hard to make sense of from a naturalistic point of view. Hence the problem of consciousness is often called the ‘problem of qualia’.8

But, though it is not controversial that there is such a thing as phenomenal consciousness, it is controversial that there are qualia. Some philosophers deny that there are any qualia, and by this they do not mean that there is no phenomenal consciousness.9 What they mean is that there is nothing to phenomenal consciousness over and above the representational properties of states of mind. In the case of visual perception, for example, these philosophers – known as intentionalists or representationalists – say that when I perceive something blue I am not aware of some intrinsic property of my state of mind, in addition to the blueness which I perceive. I look at a blue wall, and all I am aware of is the wall and its blueness. I am not, in addition, aware of some intrinsic properties of my state of mind.10 And this view says similar things about sensation. The believer in qualia says that, in such a case, one is also aware of what Ned Block has called ‘mental paint’: the intrinsic properties of one’s state of mind.

Things can become confusing here because other philosophers use the word ‘qualia’ simply as a synonym for ‘phenomenal character’ – so that to have phenomenal consciousness is, as a matter of definition, to have qualia. This is very unhelpful because it makes it impossible to understand what philosophers such as Tye and Dennett could possibly mean when they deny that there are qualia. To make a first attempt at clarifying matters here, we must distinguish two ways of using the term ‘qualia’: (i) to have qualia is simply to have experience with a phenomenal character; or (ii) qualia are non-intentional (non-representational) qualities of experience.

The debate about consciousness involves, it seems, a large amount of terminological confusion. We need to make a broad distinction between phenomenal consciousness – the thing to be explained – and those properties that are appealed to in order to explain phenomenal consciousness. Unless we do this we will not understand what it is that philosophers are doing when they deny the existence of qualia. Superficially, it might look as if they are rejecting the phenomena of consciousness, whereas what they are really rejecting is a certain way of explaining phenomenal consciousness: in terms of qualia, non-intentional, non-representational properties of mental states.

These clarifications made, we must finally turn to an overdue topic, the mind–body problem.

Consciousness and physicalism

In Chapter 2 (‘The mind–body problem’) I said that the mind–body problem can be expressed in terms of the puzzlement which we feel when trying to understand how a mere piece of matter like the brain can be the source of something like consciousness. On the one hand, we feel that our consciousness must just be based on matter; but, on the other hand, we find it impossible to understand how this can be so. This is certainly what makes many people think that consciousness is mysterious; but, by itself, it is not a precise enough thought to give rise to a philosophical problem. Suppose someone were to look at a plant and, having found out about the processes of photosynthesis and cellular growth in plants, still found it incredible that plants could grow only with the help of sun, water and soil. Tough. No interesting philosophical consequences should be drawn from this person’s inability to understand the scientific facts. Of course, life and reproduction can look like remarkable and mysterious phenomena; but the proper response to this is simply to accept that certain phenomena in nature are remarkable and maybe even mysterious. But that doesn’t mean that they cannot be explained by science. The ability of creatures to reproduce themselves is now fairly well understood by scientists; it may be remarkable and mysterious for all that.

To approach the issue in another way, consider the argument that physicalist or materialist views typically give for their view that mental states (both thoughts and conscious states) are identical with states of the brain. In rough outline, they argue, first, that conscious and other mental states have effects in the physical world (perhaps using the kinds of argument which I used in Chapter 2, ‘The causal picture of thoughts’, p. 54); and, second, that every physical happening is the result of purely physical causes, according to physical law (this is sometimes called ‘the causal closure of the physical’).11 I cannot go into the reasons for this second assumption in any detail here. Let’s just say that physicalists believe that this is the consequence of what we have learned from science: science succeeds in its explanatory endeavours by looking for the underlying mechanisms for things which happen. And looking for the underlying mechanisms ends up uncovering physical mechanisms – the sorts of mechanisms discovered in physics, the science of spacetime, matter and energy. As David Lewis puts it:

[T]here is some unified body of scientific theories of the sort we now accept, which together provide a true and exhaustive account of all physical phenomena. They are unified in that they are cumulative: the theory governing any physical phenomenon is explained by theories governing phenomena out of which that phenomenon is composed and by the way it is composed out of them. The same is true of the latter phenomena, and so on down to fundamental particles or fields governed by a few simple laws, more or less as conceived in present-day theoretical physics.12

It is this kind of thing which grounds physicalists’ confidence in the idea that, ultimately, all physical effects are the result of physical causes. They then conclude that, if mental causes really do have effects in the physical world, then they must themselves be physical. For, if mental causes weren’t physical, then there would be physical effects which are brought about by non-physical causes, which contradicts the second assumption.
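Set out as a bare skeleton (this is my summary of the reasoning just given, not a quotation), the argument runs: (1) mental states have effects in the physical world; (2) every physical effect is the result of purely physical causes (the causal closure of the physical); therefore (3) mental causes must themselves be physical, since otherwise some physical effects would have non-physical causes, contradicting (2).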

This is a quite general argument for identifying mental states with physical states (for example, states of the brain). Call it the ‘causal argument for physicalism’. Although it rests on a scientific or empirical assumption about the causal structure of the physical world, the causal argument for physicalism does not rely on scientists actually having discovered the basis in the brain (what they tend to call the ‘neural correlate’13) of any particular mental state. Although most physicalists think that such neural correlates will eventually be found, they are not presupposing that they will be found; all they are presupposing in this argument is the causal nature of mental states and the causal closure of the physical world. It follows that one could object to the conclusion of the argument either by objecting to the causal nature of mental states, or by objecting to the causal closure of the physical world, or by saying that there is some confusion or fallacy in moving from these two assumptions to the conclusion that mental states are states of the brain.

But notice that it is not a serious objection to this conclusion just to say: ‘but mental states do not seem to be states of the brain!’. This is, it must be admitted, a very natural thought. For it is true that when one introspects one’s states of mind – in the case of trying to figure out what one is thinking, for example – it does not seem as if we are obtaining some sort of direct access to the neurons and synapses of our brains. But, if the argument above is right, then this evidence from introspection is irrelevant. For if it is true that mental states are states of the brain, then it will be true that, as a matter of fact, being in a certain brain state will seem to you to be a certain way, although it might not seem to be a brain state. But that’s OK; it can seem to you that George Orwell wrote 1984 without its seeming to you that Eric Blair did, even though, as a matter of fact, Eric Arthur Blair did write 1984. (Logicians will say that ‘it seems to me that …’ is an intensional context: see Chapter 1, ‘Intentionality’, p. 30.) The conclusion of the causal argument for physicalism is that mental states are brain states. To object to this by saying, ‘but surely mental states can’t be brain states, because they don’t seem to be!’ is not to raise a genuine objection: it is just to reject the conclusion of the argument. It is as if someone said, in response to the claim that matter is energy, ‘matter cannot be energy because it does not seem like energy!’. In general, when someone asserts some proposition, P, it is not a real objection to say, ‘but P does not seem to be true; therefore it is not true!’. And the point is not that one might not be correct in denying P. The point is rather that there is a distinction between raising an objection to a thesis and denying the thesis.

So mental states might be brain states, even if they do not seem to be. We can illustrate this in another way, by using a famous story about Wittgenstein. ‘Why did people used to think that the sun went around the earth?’ Wittgenstein once asked. When one of his students replied ‘Because it looks as if the sun goes around the earth’, he answered, ‘And how would it look if the earth went around the sun?’. The answer, of course, is: exactly the same. So we can make a parallel point in the case of mind and brain: why do some people think that mental states are not brain states? Answer: because mental states do not seem like brain states. Response: but how would they seem if they were brain states? And the answer to this, of course, is: exactly the same. Therefore, there is no simple inference from the fact that being in a mental state makes things seem a certain way to any conclusion about whether mental states have a physical nature or not.

No simple inference; but maybe there is a more complicated one concealed inside this (admittedly very natural) objection. Some philosophers think so; and they think that it is consciousness which really causes the difficulty for physicalism (and, as we shall see, for the mechanical mind too). There are various versions of this problem of consciousness for physicalism. Here I will try and extract the essence of the problem; the Further reading section (pp. 231–232) will indicate ways in which the reader can explore it further.

The essence of the problem of consciousness derives from the apparent fact that any physicalist description of conscious states seems to be, in Nagel’s words, ‘logically compatible with the absence of consciousness’. The point can be made by comparison with other cases of scientific identifications – identifications of everyday phenomena with entities described in scientific language. Consider, for example, the identification of water with H2O. Chemistry has discovered that the stuff that we call ‘water’ is made up of molecules which are themselves made up of atoms of hydrogen and oxygen. There is nothing more to being water than being made up of H2O molecules; this is why we say that water is (i.e. is identical with) H2O. Given this, then, it is not logically possible for H2O to exist and water not to exist; after all, they are the same thing! Asking whether there could be water without H2O is like asking whether there could be George Orwell without Eric Arthur Blair. Of course not; they are the same thing.

If a conscious mental state – for example, a headache – were really identical with a brain state (call it ‘B’ for simplicity), then it would in a similar way be impossible for B to exist and for the headache not to exist. For, after all, they are supposed to be the same thing. But this case does seem to be different from the case of water and H2O. For whereas the existence of water without H2O seems absolutely impossible, the existence of B without the headache does seem to be possible. Why? The short answer is: because we can coherently conceive or imagine B existing without the headache existing. We can conceive, it seems, a creature who is in all the same brain states as I am in when I have a headache but who in fact does not have a headache. Imaginary creatures like this are known in the philosophical literature as ‘zombies’: a zombie is a physical replica of a conscious creature who is not actually conscious.14 The basic idea behind the zombie thought-experiment is that, although it does not seem possible to have H2O without water, it does seem possible (because of the possibility of zombies) to have a brain state without a conscious state; so consciousness cannot be identical with or constituted by any brain states.

This seems like a very fast way to refute physicalism! However, although it is very controversial, the argument (when spelled out clearly) does not involve any obvious fallacy. So let’s spell it out more slowly and clearly. The first premise is:

1. If zombies are possible, then physicalism is false.

As we saw in Chapter 1, physicalism has been defined in many ways. But here we will just take it to be the view that is the conclusion of the causal argument above: mental states (including conscious and unconscious states) are identical with states of the brain. The argument against physicalism is not substantially changed, however, if we say that, instead of being identical with states of the brain, mental states are exhaustively constituted by states of the brain. Identity and constitution are different relations, as identity is symmetrical where constitution is not (see Chapter 1: ‘Pictures and resemblance’, p. 13, for this feature of relations). If Orwell is identical with Blair, then Blair is identical with Orwell. But if a parliament is constituted by its members, then it does not follow that the members are constituted by parliament. Now, one could say that states of consciousness are constituted by states of the brain, or one could say that they are identical with states of the brain. Either way, the first premise does seem to be true. For both ideas are ways of expressing the idea that conscious states are nothing over and above states of the brain. Putting it metaphorically, the basic idea is that, according to physicalism, all God needs to do to create my conscious states is to create my physical brain. God does not need to add anything else. So, if it could be shown that creating my brain is not enough to create my states of consciousness, then physicalism would be false. Showing that zombies are possible is a way of showing that creating my brain is not enough to create my states of consciousness. This is why premise 1 is true.

The next premise is:

2. Zombies are conceivable (or imaginable).

What this means is that we can coherently imagine a physical replica of a conscious being (e.g. me) without any consciousness at all. This zombie-me would have all the same physical states as me, the same external appearance, and the same brain and so on. But he would not be conscious: he would have no sensations, no perceptions, no thoughts, no imagination, nothing. Perhaps we can allow him to have all sorts of unconscious mental states (the sort described in Chapter 1, ‘Thought and consciousness’, p. 26). But what he lacks altogether is consciousness of any kind. Obviously, when we are imagining the zombie, we are imagining it from the ‘outside’; we cannot imagine it from the ‘inside’, from the zombie’s own point of view. For there is, of course, no such thing as the zombie’s point of view.

Let’s just be clear about what premise 2 says. If someone asserts premise 2, they are not saying that there really are any zombies, or that for all I know, you might all be zombies, or that they are possible in any realistic or scientific sense. Not at all. One can deny outright that there are any zombies, deny that I have any doubts about whether you are conscious, and deny that there could be, consistent with the laws of nature as we know them, any such things – and one can still hold premise 2. Premise 2 asserts the mere, bare conceivability of physical replicas who are not conscious.

There is no obvious contradiction in stating the zombie hypothesis. But maybe there is an unobvious one, something hidden in the assumptions we are making, which shows why premise 2 is really false. Perhaps we are merely thinking that we are imagining the zombie, but we aren’t really coherently imagining anything. It can happen that someone tries to imagine something, and seems to imagine it, but does not really succeed in imagining precisely that thing because it is not really possible. I might, for example, try and imagine being my brother. I think I can imagine this, living where he is living, doing what he is doing. But of course I cannot literally be my brother: no-one can literally be identical with someone else. This is impossible. So maybe I am failing to imagine literally being my brother, and really imagining something else. Maybe what I am really imagining is me, myself, living a life rather like my brother’s life. We can say a similar thing about the parallel case of water and H2O: someone might think that they can imagine water not being H2O, but having some other chemical structure. But, arguably, they are not really imagining this, but rather imagining something that looks just like water, but isn’t water (as water is, by hypothesis, H2O).15 So someone can fail to imagine something because it is impossible: premise 2 might be false.

There is, however, another way of criticising the argument: we could agree that my being my brother is impossible; but all this shows is that one can imagine impossible things. In other words, we could accept the first two premises in this argument, but reject the move from there to the next premise:

3. Zombies are possible.

Obviously, premise 3 and premise 1 together imply the conclusion:

Physicalism is false.

So anyone who wants to defend physicalism should concentrate on the key point in the argument, the move from premise 2 to premise 3. How is this move supposed to go? Premise 2 is supposed to provide the reason to believe in premise 3. The argument says that we should believe in premise 3 because of the truth of premise 2. Notice that it is one thing to say that if X is conceivable then X is possible, and quite another to say that being conceivable is the same thing as being possible. The latter claim is implausible. Some things may be imaginable without being really possible (e.g. someone might imagine a counterexample to a law of logic), and some things are possible without being imaginable (for example, for myself, I find it impossible to imagine or visualize curved spacetime). Imaginability and possibility are not the same thing. But they are related, according to this argument: imaginability is the best evidence there is for something’s being possible. Rather as perception stands to what is real, so imagination stands to what is possible. Perceiving something is good evidence that it is real; imagining something is good evidence that it is possible. But the real is not just the perceivable, just as the possible is not just the imaginable.
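It may help to see the skeleton of the argument laid out in the notation of modal logic. (This is a schematic rendering of my own, not a formalization given in the text: read Z as ‘there is a zombie’, P as ‘physicalism is true’, C as ‘it is conceivable that’ and \Diamond as ‘it is possible that’.)

    \Diamond Z \rightarrow \neg P    % the book's premise 1
    C(Z)                             % the book's premise 2
    C(Z) \rightarrow \Diamond Z      % the disputed bridge from conceivability to possibility
    \Diamond Z                       % the book's premise 3, obtained from the two lines above
    \neg P                           % the conclusion: physicalism is false

Laid out like this, it is clear that the physicalist has exactly two points of attack: reject the second line, or reject the bridge expressed in the third.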

The physicalist will respond that, while it may be true in general that the imagination is a good guide to possibility, it is not infallible, and it can lead us astray (remember the Churchlands’ example of the luminous room in Chapter 3, ‘The Chinese Room’, p. 123). And they would then argue that the debate about consciousness and zombies is an area where it does lead us astray. We imagine something, and we think it possible; but we are misled. Given the independent reasons provided for the truth of physicalism (the causal argument above), we know it cannot be possible. So what we can imagine is, strictly speaking, irrelevant to the truth of physicalism. That’s what the physicalist should say.

To take stock: there are two ways a physicalist can respond to the zombie argument. The first is to deny premise 2 and show that zombies are not coherently conceivable. The second is to accept 2 and reject the move from 2 to 3. So, for the physicalist, either zombies are inconceivable and impossible, or they are conceivable but impossible. It seems to me that the second line of attack is less plausible: for if physicalists agree that, in some cases, imaginability is a good guide to possibility, then what is wrong with this particular case? Physicalists would be better off taking the first move, and attempting to deny that zombies are really, genuinely conceivable. They have to find some hidden confusion or incoherence in the zombie story. My own view is that there is no such incoherence; but the issues here are very complicated.

The limits of scientific knowledge

But suppose that the physicalist can show that there is a hidden confusion in the zombie story – maybe zombies are kind of conceivable, but not really possible. So the link between the brain and consciousness is necessary, appearances to the contrary. Still, physicalism is not home and dry. For there are arguments, related to the zombie argument, which aim to show that, even if this were the case, physicalism would still have an epistemological shortcoming: there would nonetheless be things which physicalism could not explain. Even if physicalism were metaphysically correct – correct in the general claims it makes about the world – its account of our knowledge of the world would be necessarily incomplete.

The easiest way to see this is to outline briefly a famous argument, expressed in the most rigorous form in recent years by Frank Jackson: he called it ‘the knowledge argument’.16 Let’s put the argument this way. First, imagine that Louis is a brilliant scientist who is an absolute expert on the physics, physiology and psychology of taste, and on all the scientific facts about the making of wine, but has never actually tasted wine. Then one day Louis tastes some wine for the first time. ‘Amazing!’ he says, ‘so this is what Chateau Latour tastes like! Now I know.’

This little story can then provide the basis of an argument with two premises:

1. Before he tasted wine, Louis knew all the physical, physiological, psychological and enological facts about wine and tasting wine.

2. After he tasted wine, he learned something new: what wine tastes like.

Conclusion: not everything that there is to know about tasting wine is something physical. There must therefore be non-physical things to learn about wine: viz. what it tastes like.
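(Put schematically – this is my own shorthand, not Jackson’s formalization – with K(x) for ‘Louis knew fact x before tasting wine’ and Phys(x) for ‘x is a physical fact’:

    \forall x\,(\mathrm{Phys}(x) \rightarrow K(x))    % premise 1: Louis knew every physical fact
    \exists x\,\neg K(x)                              % premise 2: there is a fact he did not know
    \therefore\ \exists x\,\neg\mathrm{Phys}(x)       % conclusion: some fact is not a physical fact

The inference is valid: by the first premise, any fact Louis did not already know cannot be a physical fact.)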

The argument is intriguing. For, if we accept the coherence of the imaginary story of Louis, then the premises seem to be very plausible. And the conclusion does seem to follow, fairly straightforwardly, from the premises. For if Louis did learn something new then there must be something that he learned. You can’t learn without learning something. And, because he already knew all the physical things that there are to know about wine and wine-tasting, the new thing he learns cannot be something physical. But if this is true then it must be that not everything we can know falls within the domain of physics. And not just physics: any science whatsoever that one could learn without having the experiences described by that science. Jackson concluded that physicalism is false: not everything is physical. But is this right?

The argument is very controversial, and has inspired many critical responses. Some people don’t like thought-experiments like the story of Louis.17 But it’s really hard to see what could possibly be wrong with the idea that, when someone drinks wine for the first time, they come to learn something new: they learn what it tastes like. So, if we were going to find something wrong with the story itself, it would have to be with the idea that someone could know all the physical facts about wine and wine tasting. True enough, it is hard to imagine what it would be to learn all these facts. As Dennett says, you don’t imagine someone having all the money in the world by imagining them being very rich.18 Well, yes; but if you really do want to imagine someone having all the money in the world, you surely wouldn’t go far wrong if you started off imagining them being very, very rich and then more so, without ever having to imagine them having more of anything of a different kind, just more of the same: money. And likewise with scientific knowledge: we don’t have to imagine Louis having anything of a very different kind from the kind of scientific knowledge that people have today: just more of the same.

The standard physicalist response to the argument is rather that it doesn’t show that there are any non-physical entities in the world. It just shows that there is non-physical knowledge of those entities. The objects of Louis’s knowledge, the physicalist argues, are all perfectly ordinary physical things: the wine is made up of alcohol, acid, sugar and other ordinary physical constituents. And we have not been shown anything which shows that the change in Louis’s subjective state is anything more than a change in the neurochemistry of his brain. Nothing in the argument, the physicalist claims, shows that there are any non-physical objects or properties, in Louis’s brain or outside it. But they do concede that there is a change in Louis’s state of knowledge: he knows something he did not know before. However, all this means is that states of knowledge are more numerous than the entities of which they are knowledge. (Just as we can know the same man as Orwell and come to know something new when we learn he is Blair.)

But this is not such a happy resting place for physicalists as they might think. For what this response concedes is that there are, in principle, limits to the kind of thing which physical science can tell us. Science can tell us about the chemical constitution of wine; but it can’t tell us what wine tastes like. Physicalists might say that this is not a big deal; but, if they do say this, they have to give up the idea that physics (or science in general) might be able to state every truth about the world, independently of the experiences and perspectives of conscious, thinking beings. For there are truths about what wine tastes like, and these are the kind of truths you can only learn by having tasted wine. These are truths which Louis would not have learned before tasting wine, I believe, no matter how much science he knew. So there are limits to what science can teach us – though this is a conclusion which will only be surprising or disturbing to those who thought that science could tell us everything in the first place.

So let’s return finally to the mind–body problem. Contrary to what we might have initially thought, the problem can now be clearly and precisely formulated. The form of the problem is that of a dilemma. The first horn of the dilemma concerns mental causation: if the mind is not a physical thing, then how can we make sense of its causal interactions in the physical world? The causal argument for physicalism says that we must therefore conclude that the mind is identical with a physical thing. But the second horn of the dilemma is that, if the mind is a physical thing, how can we explain consciousness? Expressed in terms of the knowledge argument: how can we explain what it feels like to taste something, even if tasting something is a purely physical phenomenon? Causation drives us towards physicalism, but consciousness drives us away from it.

Conclusion: what do the problems of consciousness tell us about the mechanical mind?

What does the mind–body problem have to do with the mechanical mind? The mechanical view of the mind is a causal view of mind; but it is not necessarily physicalist. So an attack on physicalism is not necessarily an attack on the mechanical mind. The heart of the mechanical view of the mind is the idea that the mind is a causal mechanism which has its effects in behaviour. Mental representation undoubtedly has causal powers, as we saw in Chapter 2, so this relates the mechanical mind directly to the mind–body problem. We have found no good reason, in our investigations in this book, to undermine this view of representation as causally potent. But the mechanical view still has to engage with the causal argument for physicalism outlined in this chapter; and, if a physicalist solution is recommended, the view has to say something about the arguments from consciousness which form the other half of the dilemma which is the mind–body problem. Given the close inter-relations between thought and consciousness, the question of consciousness cannot be ignored by a defender of the mechanical mind. (Fodor, characteristically, disagrees: ‘I try never to think about consciousness. Or even to write about it.’19) The positive conclusion is that we have unearthed no powerful argument against the view that the mind is a causal mechanism which has its effects in behaviour.

Nonetheless, our investigations into the mechanical mind have also yielded one broad and negative conclusion: there seems to be a limit to the ways in which we can give reductive explanations of the distinctive features of the mind. We found in Chapter 3 that, although there are interesting connections between the ideas of computation and mental representation, there is no good reason to suppose that something could think simply by being a computer: reasoning is not just reckoning. In Chapter 4, we examined the Mentalese hypothesis as an account of the underlying mechanisms of thought; but this hypothesis does not reductively explain mental representation, but takes it for granted. The attempts to explain representation in non-mental terms examined in Chapter 5 foundered on some fundamental problems about misrepresentation and complexity. And, finally, in the present chapter, we have seen that, even if the attacks on physicalism from the ‘conceivability’ arguments are unsuccessful, they have variants which show that there are fundamental limits to our scientific knowledge of the world. Perhaps the proper lesson should be that we should try and be content with an understanding of mental concepts – representation, intentionality, thought and consciousness – which deals with them in their own terms, and does not try and give reductive accounts of them in terms of other sciences. And perhaps this is a conclusion which, in some sense, we already knew. Science, Einstein is supposed to have remarked, cannot give us the taste of chicken soup. But when you think about it – wouldn’t it be weird if it did?

Further reading

An excellent collection of essays on the philosophy of consciousness is The Nature of Consciousness edited by Ned Block, Owen Flanagan and Güven Güzeldere (Cambridge, Mass.: MIT Press 1997). This contains Thomas Nagel’s classic paper, ‘What is it like to be a bat?’, Colin McGinn’s ‘Can we solve the mind–body problem?’, Jackson’s ‘Epiphenomenal qualia’, Block’s ‘On a confusion about a function of consciousness’ and many others. See also Conscious Experience edited by Thomas Metzinger (Paderborn: Schöningh 1995). Much of the agenda in recent philosophy of consciousness has been set by David Chalmers’s ambitious and rigorous The Conscious Mind (New York, NY and Oxford: Oxford University Press 1996). Joseph Levine’s Purple Haze (New York, NY and Oxford: Oxford University Press 2001) gives a very clear, though ultimately pessimistic, account of the problem of consciousness for materialism, in terms of what Levine has christened the ‘explanatory gap’. David Papineau’s Thinking About Consciousness (Oxford: Oxford University Press 2002) is a very good defence of the view that the problems for physicalism lie in our concepts rather than in the substance of the world. On the debate over intentionality and qualia, Michael Tye’s Ten Problems of Consciousness (Cambridge, Mass.: MIT Press 1995) is a good place to start. Daniel Dennett’s Consciousness Explained (London: Allen Lane 1991) is a philosophical and literary tour de force, the culmination of Dennett’s thinking on consciousness; controversial and hugely readable, no philosopher of consciousness can afford to ignore it. Gregory McCulloch’s The Life of the Mind (London and New York: Routledge 2003) offers an unorthodox non-reductive perspective on these issues.

Glossary

adaptation A trait of an organism whose nature is explained by natural selection.

algorithm A step-by-step procedure for computing (finding the value of) a function. Also called an ‘effective procedure’ or a ‘mechanical procedure’.
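(As an illustration – the example is mine, not the book’s – here is Euclid’s procedure for finding the greatest common divisor of two numbers, written out in Python. Each step is simple and mechanical, and the procedure is guaranteed to halt with the value of the function.)

    # Euclid's algorithm: a step-by-step, mechanical procedure for
    # computing the greatest-common-divisor function.
    def gcd(a: int, b: int) -> int:
        while b != 0:
            a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
        return a

    print(gcd(48, 36))  # prints 12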

behaviourism In philosophy, the view that mental concepts can be exhaustively analysed in terms of concepts relating to behaviour. In psychology, the view that psychology can only study behaviour, because ‘inner mental states’ either are not scientifically tractable or do not exist.

common-sense psychology Also called ‘folk psychology’; the network of assumptions about mental states that is employed by thinkers in explaining and predicting the behaviour of others.

compositionality The thesis that the semantic (see semantics) and/or syntactic (see syntax) properties of complex linguistic expressions are determined by the semantic and/or syntactic properties of their simpler parts and their mode of combination.
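(A toy illustration, again mine rather than the book’s: in the miniature ‘language’ of arithmetic expressions below, the semantic value of a complex expression is fixed entirely by the values of its simpler parts and their mode of combination.)

    # The value of a complex expression is determined by the values
    # of its parts plus their mode of combination - a toy model of
    # compositional semantics.
    def value(expr):
        if isinstance(expr, int):  # a simple part: its value is itself
            return expr
        op, left, right = expr     # a complex expression: operator plus two parts
        if op == '+':
            return value(left) + value(right)
        if op == '*':
            return value(left) * value(right)

    print(value(('+', 2, ('*', 3, 4))))  # 2 + (3 * 4) = 14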

computation The use of an algorithm to calculate the value of a function.

content A mental state has content (sometimes called ‘intentional content’ or ‘representational content’) when it has some representational character or intentionality. Content is propositional content when it is assessable as true or false. Thus, the belief that fish swim has propositional content; Anthony’s love for Cleopatra does not.

dualism In general, a doctrine is dualistic when it postulates two fundamental kinds of entity or category. (Sometimes the term is reserved for views according to which these two kinds of entity give rise to a problematic tension; but this is not essential.) Substance dualism is the view that reality consists of two fundamental kinds of substance, mental and material substance (this is also called Cartesian dualism, after the Latinised version of Descartes’s surname). Property dualism is the view that there are two fundamental kinds of property in the world, mental and physical.

extension The entity in the world for which an expression stands. Thus, the extension of the name ‘Julius Caesar’ is the man Caesar himself; the extension of the predicate ‘is a man’ is the set of all men.

extensionality A feature of logical languages and linguistic contexts (parts of a language). A context or language is extensional when the semantic properties (truth and falsity) (see semantics) of sentences in it depend only on the extensions (see extension) of the constituent words, or the truth or falsity of the constituent sentences.

folk psychology See common-sense psychology.

function In mathematics, a mathematical operation that determines an output for a given input (e.g. addition, subtraction); a computable function is one for which there is an algorithm. In biology, the purpose or role or capacity of an organ in the life of the organism (e.g. the function of the heart is to pump blood around the body).

functionalism In the philosophy of mind, the view that mental states are characterised by their causal roles or causal profiles – that is, the pattern of inputs and outputs (or typical causes and effects) which are characteristic of that state. Analytic functionalism says that the meanings of the vocabulary of common-sense psychology provide knowledge of these causal roles; psychofunctionalism says that empirical psychology will provide the knowledge of the causal roles.

intensionality A feature of logical or linguistic contexts. A context is intensional when it is not extensional (see extensionality).

intentionality The mind’s capacity to direct itself on things, or to represent the world.

language of thought (LOT) The system of mental representation, hypothesised by Jerry Fodor, to explain reasoning and other mental processes. Fodor calls the system a language because it has syntax and semantics, as with natural language.

materialism Sometimes used as a synonym for physicalism. Otherwise, the view that everything is material, that is, made of matter.

Mentalese See language of thought.

mentalism The general approach in philosophy and psychology, opposed to behaviourism, which asserts the existence of inner mental states and processes which are causally efficacious in producing behaviour.

phenomenal consciousness Conscious experience in the broadest sense. A creature has phenomenal consciousness when there is something it is like to be that creature. A state of mind is phenomenally conscious when there is something it is like to be in that state of mind.

phenomenal character The specific character of a phenomenally conscious experience (see phenomenal consciousness).

phenomenology Literally, a theory of the phenomena or appearances. More specifically, the term has been used by Edmund Husserl and his followers for a specific approach to the study of appearances, which involves ‘bracketing’ (i.e. ignoring) questions about the external world when studying mental phenomena.

physicalism The view that either everything is physical or every- thing is determined by the physical. ‘Physical’ here means: the subject matter of physics.

premise In an argument, a premise is a claim from which a conclusion is drawn, usually along with other premises.

program A set of instructions that a computer uses to compute a given function.

propositional attitude A term invented by Bertrand Russell for those mental states the content of which is true or false, i.e. propositions. Beliefs are the paradigmatic propositional attitudes.

qualia The term is used in two senses. (i) The broad use has it that qualia are those properties of mental states in virtue of which they have the phenomenal character they do. (ii) The more narrow use has it that qualia are the non-representational (non-intentional) properties of mental states in virtue of which they have the phenomenal character they do.

semantics Narrowly speaking, a theory that studies the semantic properties of a language or representational system. More generally, those properties themselves: semantic properties are the properties of representations which relate them to the world, or the things they are about. Meaning, reference and truth are the paradigmatic semantic properties.

simulation theory (or simulationism) The view that the practice of common-sense psychology involves primarily a technique of imagining oneself to be in another person’s position, and understanding their behaviour by using this kind of imaginative act.

syntax Narrowly speaking, a theory that studies the syntactic properties of a language or representational system. More generally, those properties themselves: syntactic properties are the formal properties of representations, which determine whether an expression is well formed.

teleology The theory of goals or purposes, or goal-directed behaviour. A theory (e.g. natural selection) can be a theory of teleology even if it ends up by explaining purposes in terms of simpler causal processes.

Theory Theory The theory that common-sense psychology is somewhat akin to a scientific theory.

Turing machine An abstract specification of a machine, invented by Alan Turing, consisting of an infinite tape with symbols written on it and a device which reads the tape; the device can perform a small number of simple operations: move across the tape; read a symbol on the tape; erase or write a symbol on the tape. The idea is meant to illustrate the most general features of computation. See Turing’s thesis.
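(To make the idea concrete, here is a minimal simulator of my own devising in Python – an illustrative sketch, not anything from the text. The ‘device’ is just a table of instructions; the machine below scans rightwards, flipping 0s and 1s, and halts at the first blank square.)

    # A tiny Turing machine: (state, symbol) -> (write, move, new state).
    table = {
        ('flip', '0'): ('1', 'R', 'flip'),  # read 0: write 1, move right
        ('flip', '1'): ('0', 'R', 'flip'),  # read 1: write 0, move right
        ('flip', ' '): (' ', 'R', 'halt'),  # blank square: halt
    }

    def run(tape, state='flip', head=0):
        cells = list(tape) + [' ']  # a finite stand-in for the infinite tape
        while state != 'halt':
            write, move, state = table[(state, cells[head])]
            cells[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(cells).strip()

    print(run('1011'))  # prints 0100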

Turing’s thesis The thesis that any computable function can be computed by a Turing machine. Also called the Church–Turing thesis, after Alonzo Church, who put forward some similar ideas.

zombie An imaginary physical replica of a human being who lacks consciousness. Sometimes a zombie is defined as a physical replica of a human being who lacks qualia; but this talk of qualia is not essential to the zombie hypothesis.

The mechanical mind: a chronology

1543 Copernicus’s De Revolutionibus challenges the claim that the earth is the centre of the universe

1616 William Harvey explains the circulation of the blood

1632 Galileo publishes his Dialogue on the Two Great Systems of the World

1641 Publication of René Descartes’s Meditations, in which he outlines the principles of his new science

1642 Blaise Pascal invents the first purely mechanical adding machine

1651 Publication of Thomas Hobbes’s Leviathan, in which he argues for a materialistic and mechanistic conception of human beings

1690 John Locke publishes An Essay Concerning Human Understanding

1694 Gottfried Wilhelm Leibniz invents a calculating machine that can also multiply

1748 David Hume publishes An Enquiry Concerning Human Understanding

Julien de la Mettrie publishes L’Homme Machine (Man, the Machine)

1786 Luigi Galvani reports the results of stimulating a frog’s muscles by the application of an electric current

1810 Franz Josef Gall publishes the first volume of the Anatomy and Physiology of the Nervous System

1820 Charles de Colmar invents a machine that can add, subtract, multiply and divide

Joseph-Marie Jacquard invents the ‘Jacquard loom’ for weaving fabric, which uses punched boards which control the patterns to be woven


1822 Charles Babbage proposes the design of a machine to compute mathematical tables by the method of differences, which he called the ‘difference engine’. Babbage worked on the difference engine for ten years, after which he started working on his analytical engine, which was (in conception at least) the first general-purpose computer

1854 George Boole publishes The Laws of Thought

1856 Hermann von Helmholtz publishes the first volume of his Handbook of Physiological Optics

1858 Wilhelm Wundt, often considered one of the founders of scientific psychology, becomes an assistant of Hermann von Helmholtz

1859 Charles Darwin publishes Origin of Species

1873 Wundt publishes Principles of Physiological Psychology

1874 Franz Brentano publishes Psychology from an Empirical Standpoint

1879 Wundt establishes the first psychological laboratory in Leipzig

Gottlob Frege publishes his Begriffsschrift (Concept-script), the work that laid the foundations for modern logic

1883 The first laboratory of psychology in America is established at Johns Hopkins University

1886 Ernst Mach publishes The Analysis of Sensations

1890 William James publishes his Principles of Psychology

1895 Sigmund Freud and Josef Breuer publish Studies on Hysteria, the first work of psychoanalysis

1896 Herman Hollerith (1860–1929) founds the Tabulating Machine Company (which becomes International Business Machines (IBM) in 1924). Using something similar to the Jacquard loom idea, he used a punch-card reader to compute the results of the US census

1899 Aspirin first used to treat headaches

1910 Bertrand Russell and Alfred North Whitehead publish Principia Mathematica, which attempts to explain mathematics in terms of simple logical notions

1913 The behaviourist psychologist J.B. Watson publishes his paper, ‘Psychology as the behaviorist views it’

1923 Jean Piaget publishes The Language and Thought of the Child, a seminal work in developmental psychology

1931 Vannevar Bush develops a calculator for solving differential equations

1931 Kurt Gödel publishes his incompleteness theorems in the foundations of mathematics

1936 Alan Turing publishes his paper ‘On computable numbers’, in which the idea of a Turing machine is outlined

1941 German engineer Konrad Zuse develops a computer to design aeroplanes and missiles

1943 British Intelligence complete a code-breaking computer (‘Colossus’) to decode German military messages

1944 Howard Aiken of Harvard University, working with IBM, produces the automatic sequence controlled calculator (known as ‘Mark I’), a large-scale electromechanical calculator whose purpose was to create ballistic charts for the US Navy

1945 John von Neumann designs the electronic discrete variable automatic computer (EDVAC). EDVAC had a memory which held a stored program as well as data, and a central processing unit. This ‘von Neumann architecture’ became central to computer design

1946 John Presper Eckert and John W. Mauchly, working at the University of Pennsylvania, build the electronic numerical integrator and calculator (ENIAC). ENIAC was a general-purpose computer which computed at speeds one thousand times faster than Aiken’s Mark I

1948 The invention of the transistor initiates some major changes in the computer’s development. The transistor was being used in computers by 1956

1949 Lithium is used to treat depression

1950 Turing publishes his article ‘Computing machinery and intelligence’, which describes the ‘Turing test’ for intelligence (‘the imitation game’)

1953 Francis Crick, James Watson and Maurice Wilkins discover the structure of DNA

1957 Noam Chomsky publishes Syntactic Structures, in which he puts forward his view that surface features of language must be understood as the result of underlying operations or transformations.

1958 Jack Kilby, an American engineer, develops the integrated circuit, combining different electronic components on a small silicon disk, and allowing computers to become smaller

1960 Hilary Putnam publishes his defence of functionalism in the philosophy of mind, ‘Minds and machines’

1963 Donald Davidson publishes ‘Actions, reasons and causes’

1971 The term ‘cognitive science’ is introduced by the English scientist C. Longuet-Higgins

1971 The development of the Intel 4004 chip, which locates all the components of a computer (central processing unit, memory, etc.) on a tiny chip

1981 IBM introduces its first personal computer (PC)

1982 Posthumous publication of David Marr’s Vision

1984 Apple introduces its first ‘Macintosh’ computer, using the graphical user interface (mouse, windows, etc.) first developed by Xerox in the 1970s (and, ironically, deemed not commercially viable)

1988 The Human Genome Project is established in Washington DC

1997 Garry Kasparov, the chess grandmaster and world champion, is defeated by ‘Deep Blue’, a chess-playing computer

Notes

The puzzle of representation

Quoted by Peter Burke, The Italian Renaissance (Cambridge: Polity Press 1986), p. 201.

Galileo, The Assayer, in Discoveries and Opinions of Galileo, translated by Stillman Drake (New York, NY: Doubleday 1957), pp. 237–238.

J. de la Mettrie, Man, the Machine (1748, translated by G. Bussey; Chicago, Ill.: Open Court 1912).

Hobbes, Leviathan (1651), Introduction, p. 1.

The quotation from de la Mettrie is from Man, the Machine. The quotation from Vogt is from John Passmore, A Hundred Years of Philosophy (Harmondsworth: Penguin 1968), p. 36.

Quoted by Christopher Longuet-Higgins, ‘The failure of reductionism’ in C. Longuet-Higgins et al., The Nature of Mind (Edinburgh: Edinburgh University Press), p. 16. See also David Charles and Kathleen Lennon (eds.) Reduction, Explanation and Realism (Oxford: Oxford University Press 1991). The term ‘reductionism’ has meant many things in philosophy; for a more detailed account, see my Elements of Mind (Oxford: Oxford University Press 2001), §15.

For arguments in defence of this claim, see Tim Crane and D.H. Mellor, ‘There is no question of physicalism’, Mind 99 (1990), reprinted in D.H. Mellor, Matters of Metaphysics (Cambridge: Cambridge University Press 1991), and Tim Crane, ‘Against physicalism’ in Samuel Guttenplan (ed.) A Companion to the Philosophy of Mind (Oxford: Blackwell 1994).

Wittgenstein, Philosophical Investigations (Oxford: Blackwell 1953), §432.

For the question, for example, of how music can express emotion, see Malcolm Budd’s Music and the Emotions (London: Routledge 1986).

See Nelson Goodman’s Languages of Art (Indianapolis, Ind.: Hackett 1976), Chapter 1.

As Wittgenstein puts it: ‘It is not similarity that makes the picture a portrait (it might be a striking resemblance of one person, and yet be a portrait of someone else it resembles less)’. Philosophical Grammar (Oxford: Blackwell 1974), §V.

Though Goodman argues that it is not even necessary: see Languages of Art, Chapter 1.


Philosophical Investigations, p. 54.

This is obviously a very simple way of putting the point. For more on convention, see David Lewis, Convention (Oxford: Blackwell 1969). For scepticism about the role of convention in language, see Donald Davidson, ‘Communication and convention,’ in his Inquiries into Truth and Interpretation (Oxford: Oxford University Press 1984).

John Locke, An Essay Concerning Human Understanding (1689), Book III, Chapter 2, §1.

See George Berkeley’s criticism of Locke’s doctrine of abstract ideas in his Principles of Human Knowledge (1710).

See for example, Davidson’s attempt to elucidate linguistic meaning in terms of truth: Inquiries into Truth and Interpretation (Oxford: Oxford University Press 1984). For a survey, see Barry C. Smith, ‘Understanding language’, Proceedings of the Aristotelian Society 92, 1992.

Russell used the term in The Analysis of Mind (London: George Allen and Unwin 1921), Chapter 12. For a collection of readings, see Nathan Salmon and Scott Soames (eds.) Propositions and Attitudes (Oxford: Oxford University Press 1988).

For more on this theme, see my Elements of Mind, §34.

Quoted in H. Feigl, The “Mental” and the “Physical” (Minneapolis, Minn.: University of Minnesota Press 1967), p. 138.

‘Meno’ in Hamilton and Cairns (eds.) Plato: Collected Dialogues (Princeton, NJ: Princeton University Press 1961), p. 370.

See John R. Searle, The Rediscovery of the Mind (Cambridge, Mass.: MIT Press 1992), Chapter 7.

The idea of the unconscious in this sense is older than Freud; for an interesting discussion, see Neil Manson, ‘ “A tumbling ground for whimsies”? The history and contemporary relevance of the conscious/unconscious contrast’, in Tim Crane and Sarah Patterson (eds.) History of the Mind–Body Problem (London: Routledge 2000).

Roger Penrose, The Emperor’s New Mind (London: Vintage 1990), p. 526.

Franz Brentano, Psychology from an Empirical Standpoint (translated by Rancurello, Terrell and McAlister; London: Routledge and Kegan Paul 1973), p. 88. For more on the origins of the term ‘intentionality’, see my article, ‘Intentionality’ in the Routledge Encyclopedia of Philosophy (London: Routledge 1998).

See John R. Searle, Intentionality (Cambridge: Cambridge University Press 1983).

For more on the distinction between intentionality and intensionality, see my Elements of Mind, §§4 and 35.


See W.V. Quine, ‘Reference and modality’ and ‘Quantifiers and propositional attitudes’ in L. Linsky (ed.) Reference and Modality (Oxford: Oxford University Press 1971).

See Fred Dretske, Seeing and Knowing (London: Routledge and Kegan Paul 1969), Chapter 1.

These remarks are directed against Quine: see Word and Object (Cambridge, Mass.: MIT Press 1960), especially pp. 219–221.

See D.M. Armstrong, A Materialist Theory of the Mind (London: Routledge and Kegan Paul 1968; reprinted 1993), Chapter 14; and M.G.F. Martin, ‘Bodily awareness: a sense of ownership’, in J. Bermudez, and N. Eilan (eds.) The Body and the Self (Cambridge, Mass.: MIT Press 1995).

Lewis Wolpert, Malignant Sadness: the Anatomy of Depression (London: Faber 1999). This is similar to the description of depression or melancholy given by Sartre in his Sketch for a Theory of the Emotions (London: Methuen 1971; originally published 1939); see especially pp. 68–69.

For this distinction, see John Haugeland, ‘The intentionality all-stars’, in J. Tomberlin (ed.) Philosophical Perspectives 4: Action Theory and the Philosophy of Mind (Atascadero, Calif.: Ridgeview 1990), p. 385 and p. 420 fn. 6. See also John Searle, Intentionality, p. 27, for a related distinction.

Understanding thinkers and their thoughts

I heard the story from P.J. Fitzpatrick – unfortunately I have not been able to trace the source.

For a very clear and readable introduction, see the first half of Mark Solms and Oliver Turnbull, The Brain and the Inner World: an Introduction to the Neuroscience of Subjective Experience (New York, NY: Other Press 2002).

For a standard critique of dualism, see Peter Smith and O.R. Jones, The Philosophy of Mind (Cambridge: Cambridge University Press 1986), Chapters 1–3. For contemporary dualism, see W.D. Hart, The Engines of the Soul (Cambridge: Cambridge University Press, 1988), and John Foster, The Immaterial Self (London: Routledge, 1991).

This last claim is rejected by those who hold an ‘externalist’ view of thought and experience: see, for example, John McDowell, ‘Singular thought and the extent of inner space’, in P. Pettit and J. McDowell (eds.) Subject, Thought and Context (Oxford: Clarendon Press 1986). For the ‘brain in a vat’ fantasy, see Hilary Putnam, Reason, Truth and History (Cambridge: Cambridge University Press 1980), Chapter 1.


But see John McDowell, ‘On “The reality of the past” ’, in C. Hookway and P. Pettit (eds.) Action and Interpretation (Cambridge: Cambridge University Press 1978), especially p. 136.

For some behaviourist literature, see W.G. Lycan (ed.) Mind and Cognition (Oxford: Blackwell 1990), §1; for a critique of behaviourism, see Ned Block, ‘Psychologism and behaviourism’, Philosophical Review 90, 1980.

See R.M. Chisholm, Perceiving: a Philosophical Study (Ithaca, NY: Cornell University Press 1957), especially Chapter 11, §3.

For a critique of the behaviourist view of language, which has become a classic, see Chomsky’s review of the behaviourist B.F. Skinner’s book Verbal Behaviour, reprinted in Ned Block (ed.) Readings in the Philosophy of Psychology, volume II (London: Methuen 1980).

See Kathleen Wilkes, ‘The long past and the short history’ in R. Bogdan (ed.) Mind and Common-sense (Cambridge: Cambridge University Press 1991), p. 155.

David Hume, Abstract of A Treatise of Human Nature, L.A. Selby-Bigge (ed.) (Oxford: Oxford University Press 1978), p. 662.

The best place to begin a study of causation is the collection edited by Ernest Sosa and Michael Tooley, Causation (Oxford: Oxford University Press 1993).

Hume, An Enquiry Concerning Human Understanding, Selby-Bigge (ed.) (Oxford: Oxford University Press 1975), §7.

G.E.M. Anscombe, ‘The causation of behaviour’ in C. Ginet and S. Shoemaker (eds.) Knowledge and Mind (Cambridge: Cambridge University Press 1983), p. 179. For another influential non-causal account of the relation between reason and action, see A. Melden, Free Action (London: Routledge and Kegan Paul 1961).

See Ludwig Wittgenstein, Philosophical Investigations, §341. For an excellent introduction to Wittgenstein’s thought on these questions, see Marie McGinn, Wittgenstein and the Philosophical Investigations (London: Routledge 1995).

See ‘Actions, reasons and causes’ in Davidson, Essays on Actions and Events (Oxford: Oxford University Press 1980).

For perception, see H.P. Grice, ‘The causal theory of perception’ in J. Dancy (ed.) Perceptual Knowledge (Oxford: Oxford University Press 1988); for memory, see C.B. Martin and Max Deutscher, ‘Remembering’, Philosophical Review 75, 1966; for knowledge, see Alvin Goldman, ‘A causal theory of knowing’, Journal of Philosophy 64, 1967; for language and reality, see Dennis D.W. Stampe, ‘Toward a causal theory of linguistic representation’, Midwest Studies in Philosophy II, 1977.

Adam Morton, Frames of Mind (Oxford: Oxford University Press 1980), p. 7.


For theoretical entities, see David Lewis, ‘How to define theoretical terms’, in his Philosophical Papers, volume I (Oxford: Oxford University Press 1985). The idea derives from F.P. Ramsey, ‘Theories’, in his Philosophical Papers (ed. D.H. Mellor; Cambridge: Cambridge University Press 1991). For a good account of the claim that mental states are theoretical entities, see Stephen P. Stich, From Folk Psychology to Cognitive Science (Cambridge, Mass.: MIT Press 1983).

For a contrasting view, see J.J.C. Smart, Philosophy and Scientific Realism (London: Routledge and Kegan Paul 1963), and D.M. Armstrong, A Materialist Theory of the Mind, Chapter 12.

I heard R.B. Braithwaite suggest this analogy in a radio programme by D.H. Mellor on the philosophy of F.P. Ramsey, ‘Better than the stars’, BBC Radio 3, 27 February 1978.

This is the approach taken by David Lewis in ‘Psychophysical and theoretical identification’, in Ned Block (ed.) Readings in the Philosophy of Psychology (London: Methuen 1980), volume I.

Morton, Frames of Mind, p. 37. See also Stephen Schiffer, Remnants of Meaning (Cambridge, Mass.: MIT Press 1987), pp. 28–31.

Morton, Frames of Mind, p. 28.

See Robert Stalnaker, Inquiry (Cambridge, Mass.: MIT Press 1984), Chapter 1.

The inference has been famously made, though: see Arthur Eddington, The Nature of the Physical World (Cambridge: Cambridge University Press 1929), pp. xi–xiv.

The vindication approach has been defended by Jerry Fodor: see Psychosemantics (Cambridge, Mass.: MIT Press 1987), Chapter 1.

For the elimination approach, see especially Paul M. Churchland, ‘Eliminative materialism and the propositional attitudes’, Journal of Philosophy 78 (1981), and Patricia S. Churchland Neurophilosophy (Cambridge, Mass.: MIT Press 1986).

For a particularly clear statement of this line of argument, see especially Stephen Stich, From Folk Psychology to Cognitive Science.

Churchland, ‘Eliminative materialism and the propositional attitudes’, p. 73.

Ibid., p. 76.

Paul M. Churchland, Matter and Consciousness (Cambridge, Mass.: MIT Press 1984), p. 48.

See Hilary Putnam, Representation and Reality (Cambridge, Mass.: MIT Press, 1988).

For more discussion of these points against eliminative materialism, see T. Horgan and James Woodward, ‘Folk psychology is here to stay’, in W.G. Lycan (ed.) Mind and Cognition, and Colin McGinn, Mental Content (Oxford: Blackwell 1989), Chapter 2.


Jane Heal, ‘Replication and functionalism’, in Jeremy Butterfield (ed.) Language, Mind and Logic (Cambridge: Cambridge University Press 1986). See Robert Gordon, ‘Folk psychology as simulation’, Mind & Language 1 (1986), Alvin Goldman, ‘Interpretation psychologised’, Mind & Language 4 (1989), and a special issue of Mind & Language 7, nos. 1 and 2 (1992).

Quine, Word and Object (Cambridge, Mass.: MIT Press 1960), p. 219.

Quine, ‘On mental entities’, in The Ways of Paradox (Cambridge, Mass.: Harvard University Press 1976), p. 227.

C. R. Gallistel, The Organisation of Learning (Cambridge, Mass.: MIT Press 1990), p. 1.

Computers and thought

The example is from Ned Block, ‘The computer model of the mind’, in Daniel N. Osherson et al. (eds.) An Invitation to Cognitive Science, volume 3, Thinking (Cambridge, Mass.: MIT Press 1990). This is an excellent introductory paper which covers ground not covered in this chapter – for example, the Turing test (see below).

For an account of Turing’s life, see Andrew Hodges’s biography, Alan Turing: the Enigma (New York, NY: Simon & Schuster 1983).

In fact, the machine’s tape needs to be infinitely long. For an explanation, see, for example, Penrose, The Emperor’s New Mind, Chapter 2.

See Penrose, The Emperor’s New Mind, p. 54. See also Chapters 2 and 3 of Joseph Weizenbaum, Computer Power and Human Reason (Harmondsworth: Penguin 1976).

See Weizenbaum, Computer Power and Human Reason, pp. 51–53.

For a very clear exposition of the Church–Turing thesis, see Clark Glymour, Thinking Things Through (Cambridge, Mass.: MIT Press 1992), pp. 313–315.

For the distinction, see John Haugeland, Mind Design (Cambridge, Mass.: MIT Press 1981), Introduction, §5.

See D.H. Mellor, ‘How much of the mind is a computer?’, in D.H. Mellor, Matters of Metaphysics.

Jerry Fodor, The Language of Thought (Hassocks: Harvester 1975); see also Gallistel, The Organisation of Learning, p. 30.

Penrose, however, thinks that the ‘ultimate’ physics will not be computable, and that this fact is relevant to the study of the mind: see The Emperor’s New Mind, p. 558.

See Dennett’s Brainstorms (Hassocks: Harvester Press 1978).

See Artificial Intelligence (Cambridge, Mass.: MIT Press 1985), p. 178.

Searle, Minds, Brains and Science (Harmondsworth: Penguin 1984), p. 44.


G.W. Leibniz, Selections (ed. P. Wiener; New York, NY: Scribner 1951), p. 23; see also L.J. Cohen, ‘On the project of a universal character’, Mind 63, 1954.

George Boole, The Laws of Thought (Chicago, Ill.: Open Court 1940), volume II, p. 1.

See Haugeland, Artificial Intelligence, p. 168 fn 2.

Margaret Boden (ed.) The Philosophy of Artificial Intelligence (Oxford: Oxford University Press 1990), Introduction, p. 3; the previous quotation is from Alan Garnham, Artificial Intelligence: an Introduction (London: Routledge 1988), p. xiii.

See David Marr, ‘Artificial intelligence: a personal view’, in Margaret Boden (ed.) The Philosophy of Artificial Intelligence, and in John Haugeland (ed.) Mind Design.

See Jack Copeland, Artificial Intelligence: a Philosophical Introduction (Oxford: Blackwell 1993), pp. 26 and 207–208.

Turing’s paper is reprinted in Boden (ed.) The Philosophy of Artificial Intelligence. For more on the Turing test, see Ned Block, ‘The computer model of the mind’, and his ‘Psychologism and behaviourism’.

I am ignoring another controversial claim: that computers cannot think because a famous mathematical theorem, Gödel’s theorem, shows that thinking can involve recognising truths which are not provable – and hence not computable. This argument was first proposed by J.R. Lucas – see, for example, The Freedom of the Will (Oxford: Oxford University Press 1970) – and has been revived by Roger Penrose in The Emperor’s New Mind. Some writers think the Penrose–Lucas thesis is very important; others dismiss it in a few paragraphs. This is true both of the friends of the computational picture of the mind – see, for example, Glymour, Thinking Things Through, pp. 342–343 – and its enemies – see Dreyfus, What Computers Still Can’t Do (Cambridge, Mass.: MIT Press, revised edition 1992), p. 345. In this book I will put the thesis to one side, as the issues behind it cannot be properly assessed without a lot of technical knowledge.

This story is from Harry Collins, ‘Will machines ever think?’, New Scientist, 20 June 1992, p. 36.

George Orwell, ‘Politics and the English language’, in Inside the Whale and other Essays (Harmondsworth: Penguin 1957), p. 156.

Dreyfus, What Computers Still Can’t Do, p. 3.

Ibid., p. xvii.

See Gilbert Ryle, The Concept of Mind (London: Hutchinson 1949), Chapter 2.

Dreyfus, What Computers Still Can’t Do, p. 37.

Ibid., p. 27.


For a discussion of CYC, see Jack Copeland, Artificial Intelligence: a Philosophical Introduction (Oxford: Blackwell 1993), Chapter 5, §6, from which I have borrowed these details. Dreyfus discusses CYC in detail in the introduction to What Computers Still Can’t Do.

What Computers Still Can’t Do, p. 43.

For the frame problem, see Daniel Dennett, ‘Cognitive wheels: the frame problem of AI’, in Margaret Boden (ed.) The Philosophy of Artificial Intelligence; Jack Copeland, Artificial Intelligence, Chapter 5.

See ‘Minds, brains and programs’, Behavioral and Brain Sciences 1980, and Minds, Brains and Science (Harmondsworth: Penguin 1984), Chapter 2.

Paul M. Churchland and Patricia Smith Churchland, ‘Could a machine think?’, Scientific American, January 1990, p. 29.

Quoted by Dreyfus, What Computers Still Can’t Do, p. 129.

See Copeland, Artificial Intelligence, Chapters 5 and 9, for a fair-minded assessment of the failures of AI.

The mechanisms of thought

Hobbes, Leviathan, Part I, ‘Of Man’, Chapter 5, ‘Of reason and science’.

See John Haugeland, Mind Design, Introduction.

For more discussion of intentionalism, see my Elements of Mind, especially Chapters 3 and 5.

For brilliant discussions of these questions, though, see Dennett, ‘Towards a cognitive theory of consciousness’ and ‘Why you can’t make a computer that feels pain’, in his Brainstorms. See also John Haugeland, ‘The nature and plausibility of cognitivism’, in Haugeland J. (ed.) Mind Design.

This was certainly Hilary Putnam’s aim in ‘The nature of mental states’ and ‘Philosophy and our mental life’ in his Mind, Language and Reality (Cambridge: Cambridge University Press 1975) and other papers that put forward the functionalist theory. It was not his aim, I think, to put forward a computational theory of mind.

Quoted by Gregory McCulloch, Using Sartre (London: Routledge 1994), p. 7.

The example comes from Dennis Stampe, ‘Toward a causal theory of linguistic representation’, Midwest Studies in Philosophy.

See Donald Davidson, ‘Theories of meaning and learnable languages’, in his Inquiries into Truth and Interpretation (Oxford: Oxford University Press 1984).

Fodor sometimes uses a nice comparison between thinking and the sorts of deductions Sherlock Holmes performs to solve his cases. See ‘Fodor’s guide to mental representation’, in A Theory of Content and Other Essays (Cambridge, Mass.: MIT Press 1990), p. 21.

Haugeland, ‘Semantic engines: an introduction to mind design’, in Haugeland (ed.) Mind Design, p. 23.

‘Fodor’s guide to mental representation’, p. 22.

For a fairly accessible introduction to Chomsky’s philosophical ideas, see his Rules and Representations (Oxford: Blackwell 1980).

For a critical discussion of this notion, see Stephen P. Stich, ‘What every speaker knows’, Philosophical Review 80, 1971.

See Peter Lipton, Inference to the Best Explanation (London: Routledge 1991).

D.M. Armstrong argues that perception is belief in A Materialist Theory of the Mind, Chapter 7.

Jerry A. Fodor, The Modularity of Mind (Cambridge, Mass.: MIT Press 1983).

For informational encapsulation as the most important feature of modules, see The Modularity of Mind, pp. 37 and 71; despite some changes in his views over the years, this point has remained constant: see The Mind Doesn’t Work That Way (Cambridge, Mass.: MIT Press 2000), p. 63.

The Modularity of Mind, p. 80.

See Chomsky, Rules and Representations.

See S. Baron-Cohen, Mindblindness: an Essay on Autism and Theory of Mind (Cambridge, Mass.: MIT Press 1995).

See Fodor, The Mind Doesn’t Work That Way, Chapter 4, especially pp. 71–78.

See Fred Dretske, ‘Machines and the mental’, Proceedings and Addresses of the American Philosophical Association 59, September 1985.

Searle, for example, thinks that ‘the homunculus fallacy is endemic to computational models of cognition’, The Rediscovery of the Mind (Cambridge, Mass.: MIT Press 1992), p. 226.

The view taken in this paragraph is closer to that of William G. Lycan, Consciousness (Cambridge, Mass.: MIT Press 1987).

‘A situated grandmother?’, Mind & Language 2, 1987, p. 67.

See Quine, ‘Methodological reflections on current linguistic theory’, in Donald Davidson and Gilbert Harman (eds.) Semantics of Natural Language (Dordrecht: Reidel 1972).

For a useful discussion of tacit knowledge, see Martin Davies, ‘Tacit knowledge and subdoxastic states’, in Alexander George (ed.) Reflections on Chomsky (Oxford: Blackwell 1989).

See Fodor, Psychosemantics (Cambridge, Mass.: MIT Press 1987), Chapter 1.

See H. Dreyfus and S. Dreyfus, ‘Making a mind versus modelling the brain’, in Boden (ed.) The Philosophy of Artificial Intelligence.


See Haugeland, Artificial Intelligence, pp. 112 ff.

W. Bechtel and A. Abrahamsen, Connectionism and the Mind (Oxford: Blackwell 1991), Chapter 6; Andy Clark, Microcognition (Cambridge, Mass.: MIT Press 1989), Chapter 9.

See Jack Copeland, Artificial Intelligence, Chapter 10, §5.

See, for example, Jack Copeland, Artificial Intelligence, Chapter 9, §8, and Chapter 10, §4.

See, for example, Robert Cummins, Meaning and Mental Representation (Cambridge, Mass.: MIT Press 1989), pp. 147–156.

See Cummins’s discussion in Meaning and Mental Representation, pp. 150–152.

Meaning and Mental Representation, p. 157 fn 6.

D.E. Rumelhart and J.L. McClelland, ‘PDP models and general issues in cognitive science’, in D.E. Rumelhart and J.L. McClelland (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1 (Cambridge, Mass.: MIT Press 1986), p. 132.

See Scott Sturgeon, ‘Good reasoning and cognitive architecture’, Mind and Language 9, 1994.

J. Fodor and Z. Pylyshyn, ‘Connectionism and cognitive architecture: a critical analysis’, Cognition 28, 1988.

Explaining mental representation

Fodor, Psychosemantics, p. 97.

See Fodor, ‘Semantics, Wisconsin style’, in A Theory of Content and Other Essays, p. 32. Notice that Fodor later (‘A theory of content’) weakens the requirement to a sufficient condition only.

See C.L. Hardin, Color for Philosophers (Indianapolis: Hackett 1988).

Fodor is one: see, for example, A Theory of Content, p. x.

For this sort of scepticism, see Stephen Stich, ‘What is a theory of mental representation?’, Mind 101, 1992, and Michael Tye, ‘Naturalism and the mental’, Mind 101, 1992.

‘Semantics, Wisconsin style’, in A Theory of Content, p. 33.

See H.P. Grice, ‘Meaning’, Philosophical Review 66, 1957.

Psychosemantics, Chapter 4.

For this point, see Fred Dretske, Knowledge and the Flow of Information (Cambridge, Mass.: MIT Press 1981), p. 76, and ‘Misrepresentation’, in R. Bogdan (ed.) Belief (Oxford: Oxford University Press 1985), p. 19.


For the disjunction problem, see Fodor, A Theory of Content, Chapter 3, especially pp. 59ff; Papineau, Philosophical Naturalism (Oxford: Blackwell 1993), Chapter 3, pp. 58–59.

D.L. Cheney and R.M. Seyfarth, How Monkeys See the World: Inside the Mind of Another Species (Chicago, Ill.: University of Chicago Press 1990), p. 169. I am indebted to Pascal Ernst for this example.

Fodor, A Theory of Content, p. 90, takes a different view.

For one of the original statements of this idea, see Dennis Stampe, ‘Toward a causal theory of linguistic representation’. For an excellent critical discussion, see Cummins, Meaning and Mental Representation, pp. 40ff.

See ‘Misrepresentation’. For the general idea of a teleological function, see Karen Neander, ‘The teleological notion of “function”’, Australasian Journal of Philosophy 69, 1991, and David Papineau, Philosophical Naturalism, Chapter 2.

The term is Stampe’s: see ‘Toward a causal theory of linguistic representation’, especially pp. 51–52.

Dretske, ‘Misrepresentation’, p. 26.

The theory was first proposed in Psychosemantics, Chapter 4, and later refined in A Theory of Content, Chapter 4. For discussion, see Cummins, Meaning and Mental Representation, Chapter 5, and the essays in George Rey and Barry Loewer (eds.) Meaning in Mind (Oxford: Blackwell 1991).

This theory has been defended by J.T. Whyte, ‘Success semantics’, Analysis 50, 1990, and David Papineau, Philosophical Naturalism. The seeds of the idea are in F.P. Ramsey, ‘Facts and propositions’, in his Philosophical Papers, and were developed by R.B. Braithwaite in ‘Belief and action’, Proceedings of the Aristotelian Society, Supplementary Volume 20, 1946.

Compare Robert Stalnaker, Inquiry, Chapter 1.

See Whyte’s papers ‘Success semantics’ and ‘The normal rewards of success’, Analysis 51, 1991.

This point was anticipated by Chisholm, Perceiving, Chapter 11, fn 13, against Braithwaite’s version of the success theory in his paper ‘Belief and action’.

See Papineau, Philosophical Naturalism, Chapter 3, and Ruth Garrett Millikan, Language, Thought and other Biological Categories (Cambridge, Mass.: MIT Press 1986). In this section I follow Papineau’s version of the theory, which is not exactly the same as Millikan’s, for reasons which need not concern us.

Davidson’s ‘swampman’ example is in ‘Knowing one’s own mind’, reprinted in Q. Cassam (ed.) Self-Knowledge (Oxford: Oxford University Press 1994). Cummins uses this objection against Millikan and Papineau in Meaning and Mental Representation, Chapter 7. See Millikan, Language, Thought and other Biological Categories, p. 94, for her response.

Papineau, Philosophical Naturalism, p. 93.

See L. Wright, ‘Functions’, Philosophical Review 82, 1973.

For a criticism of this notion of function, see Fodor, The Mind Doesn’t Work That Way, p. 85.

See J.H. Barkow, L. Cosmides and J. Tooby (eds.) The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York, NY: Oxford University Press 1992).

For an excellent introduction to these issues, see Paul Griffiths and Kim Sterelny, Sex and Death: an Introduction to the Philosophy of Biology (Chicago, Ill.: University of Chicago Press 1999).

See Richard Dawkins, The Selfish Gene (Oxford: Oxford University Press 1976). There is no space to discuss Dawkins’s ideas in this book. Some of these ideas are defended by Daniel C. Dennett in Darwin’s Dangerous Idea (London: Allen Lane 1995). My own sympathies are with Fodor’s criticisms in his review of Dawkins’s Climbing Mount Improbable, in his collection In Critical Condition (Cambridge, Mass.: MIT Press 1998), especially pp. 167–169.

Paul Griffiths, ‘Adaptation and adaptationism’, in R. Wilson and F. Keil (eds.) The MIT Encyclopedia of Cognitive Science (Cambridge, Mass.: MIT Press 1999), p. 3.

For the history of the restaurant, see Rebecca Spang’s excellent book, The Invention of the Restaurant (Cambridge, Mass.: Harvard University Press 2000).

For a particularly clear statement of this point, see Fodor, In Critical Condition, pp. 163–166.

S.J. Gould and R. Lewontin, ‘The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme’, Proceedings of the Royal Society of London 205, 1979, pp. 581–598. See also Lewontin’s collection of essays, It Ain’t Necessarily So (London: Granta Books 1999). Criticism of Gould and Lewontin can be found in Dennett’s Darwin’s Dangerous Idea (London: Allen Lane 1995).

Paul Bloom, ‘Evolution of language’ in The MIT Encyclopedia of Cognitive Science, p. 292.

Fodor, In Critical Condition, p. 166.

Fodor gives a decisive argument for this claim in The Mind Doesn’t Work That Way on pp. 80–84.

Fodor, Psychosemantics, p. 97.

Wittgenstein, On Certainty (Oxford: Blackwell 1979), §141.


My account of this strategy is drawn from Cummins, Meaning and Mental Representation, and from Frances Egan, ‘Individualism, computation and perceptual content’, Mind 101, 1992 (especially pp. 444–449). I do not mean to imply that these philosophers would agree with all aspects of the strategy as I define it.

See Egan, ‘Individualism, computation and perceptual content,’ pp. 450–454; and Cummins, Meaning and Mental Representation, Chapter 8.

For this analogy see Hartry Field, Postscript to ‘Mental representation’, in Ned Block (ed.) Readings in the Philosophy of Psychology volume II (London: Methuen 1980). Field credits the analogy to David Lewis. For a use of the analogy in something closer to the sense used here, see Robert Matthews, ‘The measure of mind’, Mind 103, 1994.

See Davidson, ‘Reality without reference’, in Inquiries into Truth and Interpretation, especially pp. 224–225.

See David Marr, Vision (San Francisco, Calif.: Freeman 1982). An accessible, non-technical account of Marr’s theory is given in Kim Sterelny, The Representational Theory of Mind (Oxford: Blackwell 1990).

This is in fact the view taken by Egan and Cummins: see ‘Individualism, computation and perceptual content’, p. 452, and Meaning and Mental Representation, pp. 102–108.

Consciousness and the mechanical mind

Those interested only in the problem of consciousness can skip this introductory section, which is intended to link the question about consciousness to the rest of the book.

This remark is from Gregory McCulloch, ‘Scientism, mind and meaning’, in P. Pettit and J. McDowell (eds.) Subject, Thought and Context (Oxford: Clarendon Press 1986), p. 82. See his The Mind and its World (London: Routledge 1995) for a fuller account. The discussion in this chapter is particularly indebted to discussions with the late Greg McCulloch.

Francis Bacon, Advancement of Learning, Book 2, iv, 2.

For an example of this kind of response, see Michael Tye, The Imagery Debate (Cambridge, Mass.: MIT Press 1992), Chapters 1 and 2.

By David Rosenthal, ‘A theory of consciousness’, in Block, Flanagan and Güzeldere (eds.) The Nature of Consciousness (Cambridge, Mass.: MIT Press 1995).

Thomas Nagel, ‘What is it like to be a bat?’, in Nagel, Mortal Questions (Cambridge: Cambridge University Press 1979).


Ned Block uses the term in this way: see ‘Inverted earth’ in Block, Flanagan and Güzeldere (eds.) The Nature of Consciousness.

This is how David Chalmers expresses it in The Conscious Mind (Oxford: Oxford University Press 1996).

See Daniel Dennett, ‘Quining qualia’, in Lycan (ed.) Mind and Cognition.

For intentionalist theories of mind, see Michael Tye, Ten Problems of Consciousness (Cambridge, Mass.: MIT Press 1995), and Gilbert Harman, ‘The intrinsic qualities of experience’, in Block, Flanagan and Güzeldere (eds.) The Nature of Consciousness. For a general survey, see my Elements of Mind, Chapter 3.

For discussion of this principle, which he calls the ‘completeness of physics’, see David Papineau, Thinking about Consciousness (Oxford: Oxford University Press 2002). For some critical discussion, see Tim Crane, Elements of Mind, Chapter 2.

David Lewis, ‘An argument for the identity theory’, in Philosophical Papers, volume I (Oxford: Oxford University Press 1985), p. 105.

See Ned Block, ‘How to find the neural correlate of consciousness’, in A. O’Hear (ed.) Contemporary Issues in the Philosophy of Mind (Cambridge: Cambridge University Press 1998).

See Chalmers, The Conscious Mind, for a detailed discussion of zombies; for an earlier version of the same idea, see Ned Block, ‘Troubles with functionalism’, in Block (ed.) Readings in the Philosophy of Psychology, volume I.

This idea comes from Saul Kripke’s influential discussions in Naming and Necessity (Oxford: Blackwell 1980), lecture III.

See Frank Jackson’s ‘Epiphenomenal qualia’ in Lycan (ed.) Mind and Cognition, and the responses to the argument reprinted there by David Lewis, ‘What experience teaches’, and Laurence Nemirow, ‘Physicalism and the cognitive role of acquaintance’.

Daniel Dennett is one: see Consciousness Explained (London: Allen Lane 1991).

Consciousness Explained, p. 380ff.

In Critical Condition, p. 73.


Index


adaptation and adaptationism 194–5, 233

algorithms 87–91, 233; automatic 104–9; functions and 87–8; see also Turing machines

analogue representation 101

and-gate 113, 145

animal psychology 80–1

Anscombe, G.E.M. 59

Aquinas, St Thomas 31

argument (of function) 86

arguments: valid 144

Aristotle 2–3

Artificial Intelligence 114–18

Asymmetric Dependence theory (Fodor) see representation

Babbage, Charles 114

behaviourism 49–52, 117, 233

belief and thought 24

belief content 25; success theory of 185–9

binary notation 97

biological function: mental representation and 189–94; aetiological theory of 193

black boxes 105–6

Boden, Margaret 114

Boole, George 113–14

brains and computers 163

Brentano, Franz 31

Brentano’s Thesis 32, 36–40

causal laws 62, 175

causality 55–62; counterfactuals and 56; explanation and 56–7; regularities and 57–8

ceteris paribus clauses 158

Cheney, D.L. 180

Chinese Room 123–8

Chomsky, Noam 147, 152, 197, 241, 245

Church, Alonzo 99

Church’s Thesis 99, 103

Churchland, Patricia 127–8

Churchland, Paul 72–5, 127–8

cognition: computational theory of 133

common-sense psychology 53–4; applied to animals 80; elimination of 72–7; ontology of 72; and scientific psychology 70–80; Theory Theory of see Theory Theory; theory versus simulation 77–80; vindication of 71–2

computability: of theories 104

computable functions 88

computation: vehicles of 164

computers 83–129; thinking 110–15

conceptual definitions see definitions

connectionism 159–67

consciousness 26–30, 215–227; qualia and 39, 217–18, 235–6

‘consensus reality’ 122

convention 20

counterfactuals 56

Cummins, Robert 164, 181, 203–4

CYC project 121–2


da Vinci, Leonardo 2

Davidson, Donald 59, 192

definitions: conceptual and naturalistic 172–5; reductive 169–172

Dennett, Daniel 107, 155, 218, 228

Descartes, René 2, 4–5, 28, 46, 234

desires: biological function and 189–91; satisfaction of 188

digital representation 101

disjunction problem 179, 185

Dretske, Fred 182

Dreyfus, Hubert 118–23, 128, 159

dualism 45–6, 233–4

effective procedures 87; Turing machines and 99; see also algorithms

Einstein, Albert 26, 231

eliminative materialism 72–4, 78, 197

Emperor’s New Mind, The (Penrose) 29

ENIAC 108

error: problem of 178–85

evolutionary psychology 194–200

existential generalization 34–5

expert systems 115

explanation: causality and 56–7

extensional contexts 33

Feigl, Herbert 26

flow charts 88–9

Fodor, Jerry 103, 141–3, 146, 148–54, 156, 160, 166, 170–1, 175, 183–5, 199, 201, 231

folk psychology see common-sense psychology

frame problem 123

Freud, Sigmund 29, 75

function(s): algorithms and 85–92; biological 189–93; computable 88; instantiating versus computing 102–4; mathematical 85; teleological 182; truth- 86, 114

functional analysis (of machines) 106

Galileo 3

Gallistel, C.R. 80–1

Glanvill, Joseph 44

GOFAI (Good Old-Fashioned AI) 161

Grice, H.P. 175

Haugeland, John 145, 161

Heal, Jane 77

heuristics 109, 118, 159

Hobbes, Thomas 4, 20, 130

homunculus fallacy 155

Hume, David 55, 57

ideal conditions 181–2

idealism 45–7

ideas: as mental images 21

inferences 143–4

informational encapsulation 151

intensionality 32–5

intentionality 1, 30–6; intensionality and 32–5; intention and 32; of mental states 37–8; original versus derived 40, 192; see also Brentano’s Thesis

interpretation function 203

knowing how/knowing that 120

knowledge: argument 227–8; tacit see tacit knowledge; without observation 47

la Mettrie, Julien de 4, 5

language of thought see Mentalese

Last Judgement (Michelangelo) 17

laws of nature 57

Laws of Thought, The (Boole) 113

Leibniz, G.W. 114

Lenat, Doug 122

Leviathan (Hobbes) 130

Lewis, David 220

linguistic representation 20–3

Locke, John 20

logic 143–4; rules of 157–8

Mach bands 149–50

machine table 93–5, 98, 132

materialism 5–6, 45–6, 219; see also physicalism; eliminative materialism

meaning: concept of 175; natural (Grice) 176

mechanical world picture 2–4

Meno (Plato) 28

mental representation(s) 22–6; biological function and 189–93; causal theories of 175; mind as ‘containing’ 124; non-reductive theory of 200–7; and success in action 185–8; as a theoretical notion 170; see also representation

mental states: intentionality of 36–9

Mentalese 135–8; argument for 140–8; problems for 154–9; tacit knowledge of 147

Michelangelo 17

Millikan, Ruth 189

mind–body problem 43–7, 219, 230

misrepresentation problem 179–82

modus ponens 144–5, 157

modularity thesis 148–54

Molière 46

Morton, Adam 77

Nagel, Thomas 216, 222

natural meaning see meaning

necessary and sufficient conditions 14

neural networks 163

normal conditions see ideal conditions

ontology 73

‘organic’ world picture 4

Orwell, George 33, 119

other minds 47–54; scepticism about 48

pain 37–8, 132–3

Papineau, David 189

parallel distributed processing (PDP) 162

Penrose, Roger 29

phenomenology 212–14, 235

physicalism 45, 58, 219–27, 235; see also materialism

pictorial representation see representation

Pioneer 10 8

Plato 28

programs 108, 160; variable realisation of 109

propositional attitudes 24–6; thoughts and 26

psychology: common-sense see common-sense psychology; natural laws of 6; scientific 70–7

Psychology from an Empirical Standpoint (Brentano) 31

Pylyshyn, Zenon 151, 166

qualia 39, 131, 215–19, 235

Quine, W.V. 77–8, 156, 173

reduction(ism) 5–6, 169–172, 219–226

reductive definitions see definitions

regularities: accidental 58; causality and 57–8

reliable indication 177

representation 8–40; analogue 101; Asymmetric Dependence theory of 184–6; causal theories of 175–6; convention and 20; digital 101; distributed 162; idea of 11–13; ideas and 21; linguistic 20–3; medium of 136; mental see mental representation; pictorial 13–28; of situations 20; vehicles of 136

Representational Theory of Mind 133

rules: following versus conforming to 154–6; thinking and 119

Russell, Bertrand 25

Rutherford, E. 6

Sartre, Jean-Paul 134

scepticism 48–9

Searle, John 32, 111, 118, 123–8, 156

semantics, syntax and see syntax

Seyfarth, R.M. 180

Simon, Herbert 128

Socrates 28

Sturgeon, Scott 165

substitutivity salva veritate 33

sufficient conditions see necessary and sufficient conditions

syntax: semantics and 137–140

tacit knowledge 67, 79–80, 147, 153, 156

teleology 236

theories 53, 63–4; computability of 104

Theory Theory (of common-sense psychology) 63, 68–9, 76–80

thinking see thought

thinking computers 109–14

thought: belief and 24; causal picture of 54–62; computers and 83–129; consciousness and 26–31; definition of term 23; language of see Mentalese; propositional attitudes and 26; and rules 119–24

‘Tower Bridge’ picture 203–4

truth-functions see functions

Turing, Alan 114, 116

Turing machines 92–9; effective procedures and 99; internal states of 96; machine tables 93–5, 98, 103, 132; representation and 103–4

Turing Test 117–18

type/token distinction 136

‘universal character’ see Leibniz

validity 144–5

value (of function) 86

variables 86

vision: computational theories of 146–7

vitalism 74–5

Vogt, Karl 5

What Computers Can’t Do (Dreyfus) 118

Wittgenstein, Ludwig 9, 18, 58–9, 173, 202, 222

Wolpert, Lewis 38