# Theory and Practice (English Edition)

Synopsis

This textbook presents the fundamental theory of artificial intelligence together with its practical applications, carefully written by three senior AI experts. Addressing the representation and computation problems that arise in developing machine intelligence systems, the book introduces recent research results and discusses the practical issues involved in system implementation. The authors explore in depth the traditional symbolic reasoning techniques used to solve problems of learning, planning, and uncertainty, such as deductive inference and game trees, and also introduce newer techniques such as neural networks and probabilistic reasoning. The important algorithms that appear in the book are accompanied by Lisp source code at the end of each chapter, for readers to consult in their own experiments. The book also provides a rich set of examples of applied AI systems.

This book can serve as a textbook for artificial intelligence courses in computer science, control, mechatronics, mathematics, and related programs at colleges and universities, and as a reference for researchers and engineers working on AI research and applications.

Contents

1 introduction

robot explorers, 2

1.1 artificial intelligence in practice 3

examples of artificial intelligence systems, 4

1.2 artificial intelligence theory 5

examples of artificial intelligence theory, 6

1.3 identifying and measuring intelligence

1.4 computational theories of behavior 9

representation, 10

syntax and semantics, 11

1.5 automated reasoning 12

inference and symbolic manipulation, 13

representing common-sense knowledge, 14

combinatorial problems and search, 14

complexity and expressivity, 15

1.6 how this book is organized 16

summary 18

background 19

exercises 20

2 symbolic programming

2.1 rule-based reactive system example 25

representing sensors and sensor values as symbols, 26

2.2 introduction to lisp 27

language requirements, 27

common lisp, 27

lists and lisp syntax, 28

symbols, 28

programs and documentation, 28

2.3 interacting with lisp 29

the lisp interpreter, 29

2.4 functions in lisp 31

function invocation, 31

procedural abstraction, 32

conditional statements, 33

recursive functions, 35

evaluating functions in files, 35

2.5 environments, symbols, and scope 36

assigning values to symbols, 36

eval and apply revisited, 37

structured environments, 38

variables, 39

lexical scoping, 40

2.6 more on functions 42

functions with local state, 42

lambda and functions as arguments, 43

2.7 list processing 44

suspending evaluation using quote, 44

building and accessing elements in lists, 45

lists in memory, 45

modifying list structures in memory, 46

alternative parameter-passing conventions, 47

predicates on lists, 48

built-in list manipulation functions, 48

optional arguments, 49

list-processing examples, 49

data abstraction, 51

2.8 iterative constructs 53

mapping functions to arguments, 53

general iteration, 54

simple iteration, 55

2.9 monitoring and debugging programs 56

tracing and stepping through programs, 56

formatted output, 58

2.10 rule-based reactive system revisited 58

summary 64

background 65

exercises 65

3 representation and logic

3.1 propositional logic 73

syntax for p, 74

semantics for p, 75

3.2 formal system for p 76

logical axioms of p, 77

normal forms, 78

rules of inference, 79

proofs and theorems, 79

resolution rule of inference, 80

completeness, soundness, and decidability, 81

computational complexity, 82

solving problems with logic, 82

3.3 automated theorem proving in p 84

goal reduction in p, 85

proof by contradiction, 87

3.4 predicate calculus 88

syntax for pc, 89

translating english sentences into logic, 90

more about quantification, 91

semantics for pc, 91

3.5 formal system for pc 93

specifying programs in prolog, 94

eliminating quantifiers, 94

learning and deductive inference, 96

decidability, 98

3.6 automated theorem proving in pc 99

matching and universal instantiation, 99

goal reduction in pc, 101

unification, 103

concept description languages, 107

semantic networks, 108

3.7 nonmonotonic logic 109

closed-world assumption, 109

abductive and default reasoning, 111

minimal models, 112

3.8 deductive retrieval systems 113

forward and backward chaining, 114

reason maintenance systems, 116

nonmonotonic data dependencies, 118

summary 119

background 121

exercises 122

lisp implementation: data dependencies 127

4 search

4.1 basic search issues 133

search spaces and operators, 134

appliance assembly example, 135

exploiting structure to expedite search, 136

4.2 blind search 137

depth-first search, 138

depth-first search is space efficient, 139

breadth-first search, 140

breadth-first search is guaranteed, 141

iterative-deepening search, 141

iterative-deepening search is asymptotically optimal, 143

searching in graphs, 144

4.3 heuristic search 144

best-first search, 145

admissible evaluation functions, 146

4.4 optimization and search 149

hill-climbing search, 149

local minima and maxima, 151

gradient search, 153

simulated annealing, 153

simulated evolution and genetic algorithms, 154

application to vehicle routing, 158

4.5 adversary search 160

minimax search, 160

alpha-beta search, 163

4.6 indexing in discrimination trees 166

storing and retrieving predicate calculus formulas, 167

decision trees, 168

summary 169

background 171

exercises 171

lisp implementation: discrimination trees 174

5 learning

5.1 classifying inductive learning problems 180

supervised learning, 180

classification and concept learning, 182

unsupervised learning, 183

online and batch learning methods, 183

5.2 theory of inductive inference 183

the role of inductive bias, 184

restricted hypothesis space biases, 184

preference biases, 185

probably approximately correct learning, 186

pac learnable concept classes, 187

finding consistent hypotheses, 188

5.3 version spaces 188

attributes, features, and dimensions, 189

specializing and generalizing concepts, 190

maintaining version-space boundaries, 191

data structures for learning, 192

implementing the version-space method, 194

optimal method for conjunctions of positive literals, 195

5.4 decision trees 195

implementing a preference for small decision trees, 196

disorder and information theory, 199

decision trees in practice, 202

5.5 network learning methods 202

model for computation in biological systems, 203

adjustable weights and restricted hypothesis spaces, 205

5.6 gradient guided search 206

searching in linear function spaces, 207

experimental validation, 208

nonlinear function spaces and artificial neural networks, 210

deriving the gradient for multilayer networks, 211

error backpropagation procedure, 212

implementing artificial neural networks in lisp, 214

representational and computational issues, 217

networks with adjustable thresholds, 218

comparing the performance of different networks, 220

5.7 perceptrons 221

perceptron learning rule, 222

linearly separable functions, 223

5.8 radial basis functions 224

approximating functions by combining gaussians, 225

two-step strategy for adjusting weights, 227

functions with multidimensional input spaces, 230

5.9 learning in dynamic environments 231

reinforcement learning, 231

computing an optimal policy, 235

online methods for learning value functions, 235

learning by exploration, 239

summary 240

background 242

exercises 243

lisp implementation: learning algorithms 249

6 advanced representation

6.1 temporal reasoning 256

6.2 the situation calculus 257

constraining fluents in situations, 260

frame problem, 260

qualification problem, 262

6.3 first-order interval temporal logic 264

syntax for the interval logic, 265

representing change in the interval logic, 267

semantics for the interval logic, 268

6.4 managing temporal knowledge 269

6.5 knowledge and belief 273

possible-worlds semantics, 277

6.6 spatial reasoning 279

representing spatial knowledge, 279

planning paths in configuration space, 281

path planning as graph search, 282

locally distinctive places, 285

summary 286

background 287

exercises 288

lisp implementation: temporal reasoning 291

7 planning

7.1 state-space search 298

what is planning?, 298

planning as search, 300

representing and solving search problems, 301

state progression, 302

goal regression, 303

means/ends analysis,

machine assembly example, 305

operator schemas, 306

block-stacking problems, 307

7.2 least commitment planning 308

search in the space of partially ordered plans, 309

sound, complete, and systematic search, 312

block-stacking example, 313

recognizing and resolving conflicts, 316

variables in partially ordered plans, 317

7.3 planning in a hierarchy of abstraction spaces 320

analysis of planning with levels of abstraction, 321

towers-of-hanoi problems, 322

task reduction planning, 325

7.4 adapting previously generated plans 326

indexing, retrieving, and adapting plans, 326

analysis of adaptive planning, 331

7.5 planning with incomplete information 332

the copier-repair problem, 332

generating conditional plans, 335

contexts represent possible sets of observations, 336

7.6 more expressive models of action 340

conditional effects, 341

disjunctive preconditions, 342

universally quantified effects, 343

wandering briefcase example, 344

processes outside the planner's control, 345

summary 346

background 347

exercises 348

lisp implementation: refining partially ordered plans 351

8 uncertainty

8.1 motivation for reasoning under uncertainty 357

sources of uncertainty, 357

representing uncertain knowledge, 357

applications involving uncertainty, 358

8.2 probability theory 359

frequency interpretation of probability, 359

subjective interpretation of probability, 359

degrees of belief, 360

random variables and distributions, 361

conditional probability, 362

calculus for combining probabilities,

conditional independence, 366

maintaining consistency, 367

8.3 probabilistic networks 368

graphical models, 369

path-based characterization of independence, 371

quantifying probabilistic networks, 372

inference in probabilistic networks, 373

exact inference in tree-structured networks, 374

propagating evidence in trees, 378

exact inference in singly connected networks, 380

approximate inference using stochastic simulation, 382

likelihood-weighting algorithm, 384

probabilistic reasoning in medicine, 386

8.4 decision theory 388

preferences and utilities, 389

decision tree methods, 390

computing the value of information, 393

automated decision making in medicine, 394

summary 395

background 396

exercises 397

lisp implementation: inference in probabilistic networks 399

9 image understanding

9.1 sensors and images 410

digital images, 410

noise in image processing, 410

9.2 computer vision 412

understanding images, 413

vision versus thought, 414

9.3 human vision 415

transferring information from the eye to the brain, 415

compressing visual information, 417

9.4 vision as a recovery problem 418

what to recover, 420

geometric aspects of image formation, 420

perspective projection, 420

orthographic projection, 423

paraperspective projection, 425

shape representation, 426

surface orientation and shape under perspective, 426

surface orientation and shape under orthography, 426

stereographic projection, 427

geometric properties of the perspective projection, 427

imaging with lenses, 430

photometric aspects of image formation, 430

9.5 recovery of image descriptions 431

edge detection, 431

differentiation approaches, 432

model-based approaches, 436

edge grouping and hough transform, 437

image segmentation, 438

9.6 shape from contour 440

qualitative analysis using edge labels, 441

quantitative analysis using skewed symmetries, 442

9.7 shape from shading 444

reflectance maps, 445

solving ill-posed problems, 448

photometric stereo, 449

9.8 shape from texture 450

density of textural elements, 450

textural reflectance maps, 451

9.9 stereo 453

addressing the correspondence problem, 453

intensity-based matching, 455

edge-based matching, 456

9.10 analysis of visual motion 457

motion fields, 458

motion field estimation, 460

motion field interpretation, 463

9.11 active vision 465

9.12 applications 466

autonomous vehicle navigation, 467

object recognition, 469

summary 471

background 474

exercises 476

lisp implementation: labeling polyhedral scenes

10 natural language processing

10.1 components of language 491

content and function words, 491

structure of phrases, 492

10.2 context-free grammars 493

parsing, 495

10.3 parsing context-free grammars 496

exploiting the lexicon, 498

building a parse tree, 499

10.4 grammars involving features 502

matching with features, 505

10.5 efficient parsing with charts 507

ambiguous sentences, 507

10.6 semantic interpretation 511

word senses, 512

semantic interpretation using features, 515

disambiguating word senses, 517

10.7 generating natural language 519

10.8 natural language in context 521

speech acts, 521

establishing reference, 522

handling database assertions and queries, 524

10.9 quantifier scoping 529

summary 530

background 530

exercises 531

lisp implementation: simple parser 533

bibliography

vocabulary index

code index

Preface

This book is designed to introduce students to a set of theoretical and computational techniques that serve as a foundation for the study of artificial intelligence (AI). The presentation is aimed at students with a background in computer science at about the sophomore or junior level in college. The emphasis is on algorithms and theoretical machinery for building and analyzing AI systems. Traditional symbolic AI techniques such as deductive inference, game-tree search, and natural language parsing are covered, as are hybrid approaches such as those employed in neural networks, probabilistic inference, and machine vision. The coverage is broad, with selected topics explored in greater depth but with no attempt to exhaustively survey the entire field.

Representation

The book focuses on the importance of representation in the core chapters dealing with logic, search, and learning. It incorporates a more formal treatment of AI than is found in most introductory textbooks. This formal treatment is reflected in the attention given to syntax and semantics in logic and in the material concerning the computational complexity of AI algorithms.

The material on learning draws on recent unifying work in computational learning theory to explain a variety of techniques from decision trees to neural networks.

The book provides a consistent pedagogic example of AI in the real world through examples focusing on AI systems corresponding to robots and software robots ("softbots"). A wide range of other examples are also introduced to characterize both the potential and the variety of AI applications. The chapters on natural language processing, planning, uncertainty, and vision supply a state-of-the-art perspective unifying existing approaches and summarizing challenging areas for future research.

This book is not meant as an exhaustive survey of AI techniques. Subjects such as qualitative reasoning about physical systems and analogical reasoning are only briefly touched on in this text. Other subjects are given much more attention in this book than in traditional texts. Learning, planning, and probabilistic reasoning are treated in some depth, reflecting their increased importance in the field. The chapter on vision (Chapter 9, Image Understanding) is substantial in its coverage of topics to reflect the importance of perception in understanding intelligence and building artifacts that interact with the world in useful and interesting ways.

Theory and Practice

Although the text emphasizes theoretical foundations, practical problems involved with the implementation of AI algorithms are addressed in every chapter. A self-contained introduction to symbolic programming in Common Lisp is provided to encourage students to perform computational experiments. Lisp code is given for many of the important algorithms described in the text; however, the text is designed so that the student can ignore Lisp and implementation issues altogether if he or she chooses. The code uses a carefully chosen subset of Common Lisp to teach algorithmic issues in AI. In contrast, other texts use AI algorithms to teach Lisp programming techniques.

All the algorithms in the text are described in English prose and pseudocode. In the case of algorithms that are also given in Lisp, most of the time the code appears in a Lisp Implementation appendix at the end of the chapter, but on some occasions it appears in the main body of the chapter. Code appears in the main body when it is considered particularly important that the student explore the underlying issues empirically. With most of the Lisp code relegated to appendices, instructors are free to choose the areas they want to emphasize empirically. The tight coupling between the descriptions of algorithms in the text and the accompanying Lisp code makes it easy for students to experiment, without the bother of using two texts with different perspectives and algorithmic approaches.

We use Lisp instead of Prolog because Lisp is closest in structure to languages such as Pascal and C that students are likely to be familiar with. We use Lisp instead of Pascal or C because the list processing and symbolic manipulation routines available in Lisp allow for elegant implementations of important algorithms that can be compactly listed. Note, however, that a library of C++ code is available (see the section on "Supplements") that mirrors the Common Lisp treatment in the text function for function and algorithm for algorithm.

To the Student

Preface material is usually aimed at instructors who are thinking of adopting a text for a course. Generally, students cut straight to the first chapter or the table of contents to get some idea of what the book is about. This book is designed to teach students about the theory and practice of building computer programs that perform interesting and useful tasks. With the exception of some diversions in the introductory chapter, we leave the philosophical conundrums to the philosophers and focus on techniques, algorithms, and analytical tools that we believe students will find useful in building sophisticated (even intelligent) computer programs.

The book describes precise problems, analyzes them from a computational perspective, and specifies efficient algorithms for their solution where possible. Along the way, we provide the necessary logic, computer science, and mathematics for you to understand the important issues and ultimately develop your own solutions and propose your own problems. Our hope is that you will find the techniques and ideas in this book useful, whether you pursue a career in engineering, computer science, business management, or any other area that requires you to think in terms of computational processes that have to interact with a complex and changing world.

To the Instructor

The core material in the book is in the first five chapters, covering basic introductory and motivational material, symbolic programming for courses interested in implementation, representation and logic, search, and learning. Within these core chapters, instructors have considerable flexibility regarding what to include and how much time to spend on particular topics.

The choice of topics and allocation of lecture time will depend on the background of the students taking the course. Often students have a reasonable background in boolean logic from a previous course in computer science, engineering, or mathematics, in which case the chapter on representation and logic can move along rather quickly. Search issues are generally familiar to computer science students, and the basic blind-search methods, including depth-first and breadth-first search, should take very little time.

Figure 1 This graph illustrates some of the dependencies and connections among the chapters in this text. A solid line indicates a strong dependency between two chapters, and a dashed line indicates a connection or conditional dependency between two chapters that an instructor may wish to consider.

We recommend spending a significant amount of time on learning since the area is reasonably mature, the issues dramatically illustrate the role of representation and search, and students are generally fascinated with the prospect of building systems that learn.

Representation appears before search in the order of chapters because representation is the more fundamental idea as far as AI is concerned. We emphasize logic because it enables students to think precisely about representation. Pedagogically, there are situations that merit covering search before representation; if you are teaching Lisp and this is the students' first exposure to Lisp and symbolic programming, consider covering the search chapter first because the examples of search procedures provide a somewhat easier introduction to Lisp programming issues. With the exception of the section on discrimination networks at the end of the search chapter, the representation and search chapters can be covered in either order.

Figure 1 illustrates some of the dependencies and connections among the chapters in this text. A solid line indicates that one chapter should be covered before another chapter in order to fully understand all the material.

A dashed line indicates a connection between two chapters that an instructor may wish to emphasize or a conditional dependency that an instructor may wish to account for. For example, the section on spatial representation and robot navigation in Chapter 6 (Advanced Representation) can be used to motivate and set the stage for topics covered in Chapter 9 (Image Understanding). All the chapters are conditionally dependent on Chapter 2 (Symbolic Programming) if implementation issues are to be covered and the students require instruction in symbolic programming methods. Additional information regarding chapter dependencies and synergies as well as suggestions for course syllabi are available in the Instructor's Resource Guide (see the section on "Supplements").

Supplements

Supplemental materials for this book are available via anonymous FTP from bc.aw.com in the subdirectory bc/dean. The supplemental materials for the book include the following items:

Instructor's Guide and Solutions Manual: contains notes on each chapter, solutions to selected exercises, additional exercises for the Lisp (Chapter 2) and vision (Chapter 9) chapters, and sample exams with answers. This guide is available only on a disk from the publisher (ISBN 32548-4).

Selected figures: selected figures in encapsulated PostScript format are available for overhead transparencies.

Source code: the sample source code contained in the text is available in both Lisp and C++ implementations. Other implementations may be available (for example, the Scheme dialect of Lisp); check the README file in bc/dean for current status and recent developments.

To obtain the supplemental materials, FTP to bc.aw.com as follows:

% ftp bc.aw.com

and log on as anonymous. Use your electronic mail address as your password and connect to the directory for this book by typing

% cd bc/dean

Before retrieving supplements, it is a good idea to look at the README file to see if changes have been made since this book went to press. You can retrieve this file by typing

% get README

Type quit to exit FTP and read the README file. (Although you could read the file online, it is courteous not to load the FTP server while you are just reading.) Then log back on when you are ready to download the files that you want. Using FTP to retrieve archived files can get complicated. The README file will give you some additional advice, but you may find it helpful to consult your favorite UNIX guide or local network wizard.
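The session just described can be summarized as the following transcript. This is a sketch only: the host bc.aw.com and directory bc/dean are taken from the text above and date from when the book went to press, so the server may no longer be reachable; the commands are therefore printed rather than executed.

```shell
# Print the anonymous-FTP session described in the preface.
# The host and paths come from the text; they are historical and may
# no longer resolve, so nothing here actually contacts a server.
cat <<'EOF'
% ftp bc.aw.com
Name: anonymous
Password: <your e-mail address>
ftp> cd bc/dean
ftp> get README
ftp> quit
EOF
```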

Thanks

We benefited from the efforts of many previous authors in organizing the material covered in this text and figuring out how to present the material to students. In particular, we would like to acknowledge the texts by Charniak and McDermott [1985], Nilsson [1980], and Winston [1979] that provided our first introductions to AI. We are also indebted to Davis [1990], Genesereth and Nilsson [1987], Ginsberg [1993], and Winston [1992], whose books we consulted while writing this text.

A lot of friends and colleagues contributed to this book by providing feedback to the authors during crucial times during its writing. In particular, we would like to thank the following people:

Mark Boddy
Jack Breese
Chris Brown
Eugene Charniak
Ernie Davis
Mark Drummond
Charles Dyer
Nort Fowler
Robert Goldman
Steve Hanks
Eric Horvitz
Leslie Kaelbling
Paris Kanellakis
Richard Korf
Tom Mitchell
Ann Nicholson
Bart Selman
Jude Shavlik
Yoav Shoham
Mike Shwe
Prasad Tadepalli
Austin Tate
Dan Weld
Mike Wellman

The production and editorial help from Addison-Wesley Publishing Company was exceptional. We would like to thank Carter Shanklin, the Acquisitions Editor for Computer Science at Addison-Wesley, for shepherding us through the entire process. Judith Hibbard, the Senior Production Editor, Melissa Standen, the Editorial Assistant, and Ari Davidow, the Technical Production Assistant, were amazingly helpful at all the right times. We would also like to thank Sara Larson from the University of Maryland at College Park, who helped with the vision chapter, and Mary Andrade and Dawn Nichols from Brown University, who provided invaluable assistance in keeping track of the many drafts and handling the extensive correspondence that was required.

A large number of students also provided feedback while we were writing the text, far too many to list, but the following deserve special mention:

Scott Baetz
Ted Camus
Lloyd Greenwald
Shieu-Hong Lin
Michael Littman
Mike Perkowitz
Kostadis Roussos
Smith Surasmith

A special thanks goes to Jon Monsarrat for his unflagging enthusiasm for the project and for the many hours he spent hacking UNIX, PostScript, and LaTeX. Finally, each of us would like to thank our respective spouses and loved ones for being supportive when it was needed most.

Thomas Dean

James Allen

Yiannis Aloimonos