UnderstandingComputersAndCognition presents the premise that by challenging and redirecting the ideas of the Rationalist Tradition, a new paradigm (which they do not name) of "design" (design understood very broadly) emerges, one that allows programs to better fit their users' needs. Important ideas. A struggle to read through, but valuable, in spades. The book is inside-out, upside-down and back-to-front, so that what is clearly a very valuable contribution to the fields of BPR, systems analysis and a few others (like software development methodologies) looks trivial when you get there.
Longer, collective review below . . .
UnderstandingComputersAndCognition presents the premise that by challenging and redirecting the ideas of the Rationalist Tradition, a new paradigm (which they do not name) of "design" (design understood very broadly) emerges, one that allows programs to better fit their users' needs.
Winograd and Flores begin by introducing the idea that there is such a thing as a cultural tradition, within which intellectual endeavors take place. And the dominant tradition in the West is the Rationalistic Tradition, which includes the idea that cognition consists in manipulating representations of the world. This is also at the root of the dominant view of artificial intelligence, and of what computer programs do in general: it's all about manipulating abstract, formal (yet information-preserving) representations of the world. Although they never flat-out say it, the model of cognition (and more) they develop is proposed as a more interesting, accurate and useful metaphor for:
The book is organized in three parts. In Part I, Flores and Winograd focus on their model of cognition & experience vs. the Rationalist Tradition. Then they explore in Part II and Part III two almost independent consequences of their model of cognition: AI research and business process automation. Comments from reading the book are almost equal parts appreciation for the insights and frustration with the presentation.
About Part I
Part I establishes a description of cognition (with some epistemology, ontology and metaphysics thrown in) with the intent of addressing the use of computers through this description. Sources for this description include:
These mental fireworks gain concrete form in the description of Searle's Speech Act Theory which presents a model for "what's actually going on when we talk if we aren't coordinating compatible internal representations of the world." The structure of speech acts is presented in a (slightly clumsy) state transition diagram, as exactly the kind of formal model easily represented by computing systems. (If communication can be formally modeled, what happens if we implement this model? - Foreshadowing, your mark of good, quality literature - ed.) The description of a "conversation for action" (and its reference to "conditions of satisfaction", the same phrase Werth uses in HighProbabilitySelling), was valuable in and of itself.
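Since the text stresses that the conversation for action is "exactly the kind of formal model easily represented by computing systems," here is a minimal sketch of that idea as a state machine. The state names and transitions are a simplified illustration of the general shape of the model (request, promise/counter/decline, report, declare satisfaction), not a faithful copy of W+F's diagram:

```python
# Sketch of a "conversation for action" as a state-transition table.
# Party A makes a request of party B; the legal moves at each state
# are simplified assumptions based on the general shape of the model.

TRANSITIONS = {
    "start":     {"request": "requested"},
    "requested": {"promise": "promised",
                  "counter": "countered",
                  "decline": "declined"},
    "countered": {"accept": "promised",
                  "decline": "declined"},
    "promised":  {"report_done": "reported",
                  "renege": "declined"},
    "reported":  {"declare_satisfied": "satisfied",      # conditions of satisfaction met
                  "declare_unsatisfied": "promised"},    # back to the performer
}

def run(acts, state="start"):
    """Apply a sequence of speech acts, rejecting any illegal move."""
    for act in acts:
        options = TRANSITIONS.get(state, {})
        if act not in options:
            raise ValueError(f"{act!r} is not a legal act in state {state!r}")
        state = options[act]
    return state

# A successful conversation for action, start to finish:
final = run(["request", "promise", "report_done", "declare_satisfied"])
# final == "satisfied"
```

The point of the exercise is the foreshadowing noted above: once the conversation is a transition table, a computer can track which moves are open and which commitments are outstanding, which is essentially what Coordinator (Part III) does.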
Part I is a slog in part because, despite Winograd and Flores' insistence that the book isn't a philosophical treatise and isn't a work of high scholarship, it is structured like a paper for a review journal, and not a very good one at that. Reading any couple-page chunk, one feels whipped around among concept vertigo, ungodly pretension, and tedious repetition. Along the way, the promises that this, that or the other head-spinning concept glossed over here will be "dealt with in detail in Chapter 12" stack up to take on an ominous tone. Interestingly, different readers have greater or lesser challenges with different chapters:
Despite the presentation, the ideas introduced along the way are intriguing, illuminating, and relevant. Several readers have begun personal investigations prompted by reading this book, in addition to finding connections to their own experience or practice:
The combination of scholarly references and models gives a theoretical grounding and (ironically) an analytical, model-based ("rationalist" even) presentation to ideas of unconscious but skillful action that are presented experientially by sources as diverse as:
About Part II
Part II is a brief review of the state of the AI art as of the mid-1980s. It's mainly a story of failure and frustration. Even Winograd's own SHRDLU isn't viewed as much to write home about. The prediction is made that industrial applications of AI will devolve to highly focused uses of individual techniques, none of which add up to anything like the grand vision of generalized artificial intelligence operationally similar to humans. As of 2003, this prediction seems to have come true: these days AI means Autonomy, neural networks, and Bayesian analysis for very limited forms of decision support.
The crucial, inevitable flaw of the classic AI programs, of course, is that they all (even SHRDLU) work by having a bunch of knowledge (assumptions, really, leading to Heideggerian "blindness") about a microworld built into them by the programmer: the much-maligned internal representation. They don't work because they model a way of understanding that isn't the way humans function in the world most of the time.
This failure raises the question of whether computer systems can be organized to themselves function in a state of "Being In The World" and experience "thrownness" as an opportunity to expand their own functioning, perhaps in their own way, or perhaps via mechanisms similar to those by which humans reflect. For AI research it may well be necessary to "let the creation go" and realize that the internal structure of the system will have to form itself if we want it to have some "real" understanding of the world. That particular question points to perhaps the most insightful unspoken observation in Part I: that introspection and model formation is an unusual cognitive process, not the one where humans spend most of their time and make most of their meaning.
About Part III
Part III begins with a pretty excoriating analysis of traditional decision support systems, maintaining that the Rationalist Tradition model of decision as optimum choice between alternative actions (based on an internal representation) is not only wrong but misleading. That along with the authority that folks tend to vest in IT systems means that systems which reinforce this model are unhelpful, even dangerous.
Then comes the punchline. Seems that in the early 1980s W+F founded a company, "Action Technologies Inc." (http://actiontech.com/), to sell a product, "Coordinator", a business process tool based upon the principles of speech act theory. It's still going, with an impressive list of clients, too. Curiously, searching their site for "Heidegger" results in no matches ;-) The site is steeped in the language you'd expect from a BPR consultancy, even making heavy use of the "decision" word. Interestingly, Action Technologies' site also has plenty of instances of "efficiency" on it, when W+F are fairly clear that what's most important in management is effectiveness.
Of course, "efficiency" creates less "thrownness" in talking business, so may provide greater "coupling" with their "medium" of potential clients within the "consensual domain" of "business", leading to more productive "conversations for action" without necessarily introducing distracting "thrownness" into the process. "Efficiency" is also an easier "condition of satisfaction" to propose and verify vs. "effectiveness." Everything in quotes in the immediately preceding two sentences is developed in the earlier parts of W & F's book.
Coordinator embodies W+F's principal claim about what computers do best for people: act as structured media for conversations. In this sense, Coordinator has a remarkably similar antecedent in the work management systems implemented by Cypress Semiconductor as described in NoExcusesManagement? by TJRogers. The action / commitment systems at Cypress Semiconductor can be seen as a special-purpose tool for managing the specific conversations for action that in Rogers' view make up a semiconductor company. The Coordinator provides general support for the kinds of conversations that, as part of the "thrownness" (Heidegger concept - ed.) of being a manager, are the real activity of management.
There's a raft of interesting points and ideas presented in the "Design Example" of section 12.2, not that it contains anything that the average IT practitioner would recognize as "design", more what happens in the traditional analysis phase, plus some philosophical musings. They resonate very strongly with the view of analysis, design, and programming that emphasizes behavior over state. There may also be some parallels with TestDrivenDevelopment and iteramental/spiratrip/incrative processes. Indeed, any methodology can be seen as embodying a restricted version of the speech act model, with the specific artifacts and processes being attempts at implementing the speech act cycle within a restricted domain of practice. As a side note, we might wonder if Wikis, intranets and workflow systems are not coupled to these ideas too, since when used properly they also are structured media for conversation. (Boy, when all you have is a new paradigm, everything looks like a nail.)
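The TestDrivenDevelopment parallel can be made concrete: a test is a "condition of satisfaction" written in checkable form, so that the "declaration of satisfaction" becomes mechanical. The function and behavior below are invented purely for illustration; nothing like this appears in W+F's book:

```python
# A test as an explicit "condition of satisfaction": the assertion
# states, in machine-checkable form, what would count as the request
# ("parse dollar amounts") being fulfilled. Hypothetical example.

def parse_price(text):
    """Turn a string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

# The "request" names its conditions of satisfaction; a passing run
# is the "declaration of satisfaction" made by the test harness.
assert parse_price("$1,234.50") == 1234.50
assert parse_price("10") == 10.0
```

On this reading, the red/green cycle is a tiny, fully-formalized conversation for action between the test author and the implementer (often the same person wearing two hats).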
At a minimum, identifying that conversations and agreements are what make business go provides a different domain for automation, and a different, additional consideration for automation that doesn't directly support "conversations." F & W's take on using computers in business is based on an explicit theory of operation for business. Lots of use of computers (and other instruments) is based on an implicit theory of operation which, because it is implicit, is difficult to test or refine. Independent of the metaphor they propose for how people work, the idea that an instrument like a computer is useful in terms of a theory of operations is valuable in itself.
Important ideas. A struggle to read through, but valuable, in spades. The book is inside-out, upside-down and back-to-front, so that what is clearly a very valuable contribution to the fields of BPR, systems analysis and a few others (like software development methodologies) looks so trivial (and almost like a non-sequitur) after all the intellectual machismo of the earlier parts of the book.
Despite the difficulty in getting through the book, and the irritating incompleteness of F & W's development of the implications of their premise, the payoff is there. The references they drag in have enough value to justify working through the book. In addition, this model has implications for the systems we choose to develop, how we choose to develop them, and indeed much more, well beyond computers and cognition. Someone should re-write this book in the pyramid lead form (which this review almost is).
Referenced in the bookshelved discussion: