Why AI must try to model creativity
Creativity is a fundamental feature of human intelligence, and an inescapable challenge
for AI. Even technologically oriented AI cannot ignore it, for creative programs could be
very useful in the laboratory or the market-place. And AI-models intended (or considered)
as part of cognitive science can help psychologists to understand how it is possible for
human minds to be creative.
Creativity is not a special “faculty”, nor a psychological property confined to a tiny elite.
Rather, it is a feature of human intelligence in general. It is grounded in everyday capacities
such as the association of ideas, reminding, perception, analogical thinking, searching
a structured problem-space, and reflective self-criticism. It involves not only a cognitive
dimension (the generation of new ideas) but also motivation and emotion, and is closely
linked to cultural context and personality factors. Current AI models of creativity focus
primarily on the cognitive dimension.
A creative idea is one which is novel, surprising, and valuable (interesting, useful,
beautiful...). But “novel” has two importantly different senses here. The idea may be novel
with respect only to the mind of the individual (or AI-system) concerned or, so far as we
know, to the whole of previous history. The ability to produce novelties of the former kind
may be called P-creativity (P for psychological), the latter H-creativity (H for historical).
P-creativity is the more fundamental notion, of which H-creativity is a special case.
0004-3702/98/$ - see front matter © 1998 Elsevier Science B.V. All rights reserved.
PII: S0004-3702(98)00055-1
M.A. Boden / Artificial Intelligence 103 (1998) 347-356
AI should concentrate primarily on P-creativity. If it manages to model this in a powerful
manner, then artificial H-creativity will occur in some cases; indeed, it already has, as we
shall see. (In what follows, I shall not use the letter-prefixes: usually, it is P-creativity which
is at issue.)
- Three types of creativity
There are three main types of creativity, involving different ways of generating the novel
ideas. Each of the three results in surprises, but only one (the third) can lead to the “shock”
of surprise that greets an apparently impossible idea. All types include some H-creative
examples, but the creators celebrated in the history books are more often valued for their
achievements in respect of the third type of creativity.
The first type involves novel (improbable) combinations of familiar ideas. Let us
call this “combinational” creativity. Examples include much poetic imagery, and also
analogy, wherein the two newly associated ideas share some inherent conceptual
structure. Analogies are sometimes explored and developed at some length, for purposes
of rhetoric or problem-solving. But even the mere generation, or appreciation, of an apt
analogy involves a (not necessarily conscious) judicious structural mapping, whereby the
similarities of structure are not only noticed but are judged in terms of their strength and depth.
The second and third types are closely linked, and more similar to each other than
either is to the first. They are “exploratory” and “transformational” creativity. The former
involves the generation of novel ideas by the exploration of structured conceptual spaces.
This often results in structures (“ideas”) that are not only novel, but unexpected. One can
immediately see, however, that they satisfy the canons of the thinking-style concerned.
The latter involves the transformation of some (one or more) dimension of the space,
so that new structures can be generated which could not have arisen before. The more
fundamental the dimension concerned, and the more powerful the transformation, the more
surprising the new ideas will be. These two forms of creativity shade into one another, since
exploration of the space can include minimal “tweaking” of fairly superficial constraints.
The distinction between a tweak and a transform is to some extent a matter of judgement,
but the more well-defined the space, the clearer this distinction can be.
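The explore/transform distinction can be made concrete with a toy sketch. The "conceptual space" below, three-note melodies over a small pitch set that must end where they began, is entirely invented for illustration: exploration enumerates what the current space permits, while a transformation alters one of its dimensions so that previously impossible structures become generable.

```python
from itertools import product

# A toy conceptual space (all details invented): melodies of fixed
# length over a fixed pitch set, subject to a stylistic constraint.
SPACE = {
    "pitches": ["C", "D", "E", "G"],      # one dimension of the space
    "length": 3,
    "rule": lambda m: m[0] == m[-1],      # style: end where you began
}

def explore(space):
    """Exploratory creativity: enumerate the structures the current
    space permits, without altering the space itself."""
    return [m for m in product(space["pitches"], repeat=space["length"])
            if space["rule"](m)]

def transform(space, new_pitch):
    """Transformational creativity, in minimal form: alter one
    dimension (here, enlarge the pitch set) so that structures which
    could not have arisen before become generable."""
    new = dict(space)
    new["pitches"] = space["pitches"] + [new_pitch]
    return new

old = set(explore(SPACE))
novel = set(explore(transform(SPACE, "Bb"))) - old
```

Every melody in `novel` uses the added pitch and so was strictly impossible, not merely unvisited, in the old space; a mere "tweak" would instead relax the closing rule while leaving the dimensions intact.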
Many human beings, including (for example) most professional scientists, artists,
and jazz-musicians, make a justly respected living out of exploratory creativity. That
is, they inherit an accepted style of thinking from their culture, and then search it,
and perhaps superficially tweak it, to explore its contents, boundaries, and potential.
But human beings sometimes transform the accepted conceptual space, by altering or
removing one (or more) of its dimensions, or by adding a new one. Such transformation
enables ideas to be generated which (relative to that conceptual space) were previously
impossible. The more fundamental the transformation, and/or the more fundamental the dimension
that is transformed, the more different the newly-possible structures will be. The shock
of amazement that attends such (previously impossible) ideas is much greater than the
surprise occasioned by mere improbabilities, however unexpected they may be. If the
transformations are too extreme, the relation between the old and new spaces will not
be immediately apparent. In such cases, the new structures will be unintelligible, and very
likely rejected. Indeed, it may take some time for the relation between the two spaces to be
recognized and generally accepted.
- Computer models of creativity
Computer models of creativity include examples of all three types. As yet, those
focussed on the second (exploratory) type are the most successful. That’s not to say
that exploratory creativity is easy to reproduce. On the contrary, it typically requires
considerable domain-expertise and analytic power to define the conceptual space in the first
place, and to specify procedures that enable its potential to be explored. But combinational
and transformational creativity are even more elusive.
The reasons for this, in brief, are the difficulty of approaching the richness of human
associative memory, and the difficulty of identifying our values and of expressing them in
computational form. The former difficulty bedevils attempts to simulate combinational
creativity. The latter difficulty attends efforts directed at any type of creativity, but is
especially problematic with respect to the third (see Section 4, below).
Combinational creativity is studied in AI by research on (for instance) jokes and analogy.
Both of these require some sort of semantic network, or inter-linked knowledge-base, as
their ground. Clearly, pulling random associations out of such a source is simple. But an
association may not be telling, or appropriate in context. For all combinational tasks other
than “free association”, the nature and structure of the associative linkage is important too.
Ideally, every product of the combinational program should be at least minimally apt, and
the originality of the various combinations should be assessable by the AI-system.
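A minimal sketch of this point, with an invented semantic network and a crude shared-feature count standing in for real aptness judgement:

```python
# A toy semantic network for illustration only: each concept maps to
# a set of feature tags. Concepts, tags, and the threshold below are
# all invented; real systems need far richer associative structure.
NETWORK = {
    "moon":   {"round", "bright", "distant"},
    "coin":   {"round", "bright", "metal"},
    "desert": {"dry", "vast", "distant"},
    "ocean":  {"wet", "vast", "deep"},
}

def aptness(a, b):
    """A crude aptness score: how many features the two newly
    associated concepts share."""
    return len(NETWORK[a] & NETWORK[b])

def apt_combinations(min_shared=2):
    """Generate only those pairings that clear a minimal aptness
    threshold, rather than pulling purely random associations."""
    concepts = sorted(NETWORK)
    return [(a, b)
            for i, a in enumerate(concepts)
            for b in concepts[i + 1:]
            if aptness(a, b) >= min_shared]
```

With the threshold at 2 only the moon/coin pairing survives; lowering it to 1 admits the weaker desert associations as well. The filtering, not the random pairing, is where the work lies.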
A recent, and relatively successful, example of AI-generated (combinational) humour is
Jape, a program for producing punning riddles [1]. Jape produces jokes based on nine
general sentence-forms, such as: What do you get when you cross X with Y?; What
kind of X has Y?; What kind of X can Y?; What’s the difference between an X and
a Y? The semantic network used by the program incorporates knowledge of phonology,
semantics, syntax, and spelling. Different combinations of these aspects of words are used,
in distinctly structured ways, for generating each joke-type.
Examples of riddles generated by Jape include: (Q) What kind of murderer has fibre?
(A) A cereal killer; (Q) What do you call a strange market? (A) A bizarre bazaar; (Q) What
do you call a depressed train? (A) A low-comotive; and (Q) What’s the difference between
leaves and a car? (A) One you brush and rake, the other you rush and brake. These may
not send us into paroxysms of laughter-although, in a relaxed social setting, one or two
of them might. But they are all amusing enough to prompt wryly appreciative groans.
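The template-plus-lexicon idea behind such riddles can be sketched as follows. The lexicon entry and sentence-form are hand-picked to reproduce the "cereal killer" riddle above; this is not Jape's actual schema machinery, which combines phonology, semantics, syntax, and spelling in much richer, joke-type-specific ways.

```python
# Invented lexicon for illustration. Each entry records: the pun
# word, the homophone it trades on, the familiar phrase-head that
# homophone completes, a synonym of that head, and a property of
# the pun word.
PUNS = [
    ("cereal", "serial", "killer", "murderer", "fibre"),
]

def riddle(pun, homophone, head, synonym, prop):
    """Instantiate one 'What kind of X has Y?' sentence-form: the
    question paraphrases the familiar phrase (here 'serial killer',
    via `homophone` + `head`) using a synonym of its head plus a
    property of the pun word; the answer substitutes the sound-alike
    spelling."""
    question = f"What kind of {synonym} has {prop}?"
    answer = f"A {pun} {head}"
    return question, answer
```

`riddle(*PUNS[0])` regenerates the first riddle quoted above; everything that makes it land is precomputed in the lexicon entry, which is precisely why guaranteeing funniness at scale is hard.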
Binsted did a systematic series of psychological tests, comparing people’s reception
of Jape’s riddles with their response to human-originated jokes published in joke-books.
She also compared Jape’s products with “non-jokes” generated by random combinations.
She found, for instance, that children, by whom such humour is most appreciated, can
distinguish reliably between jokes (including Jape’s riddles) and non-jokes. Although they
generally find human-originated jokes funnier than Jape’s, this difference vanishes if Jape’s
output is pruned, so as to omit the items generated by the least successful schemata. The
riddles published in human joke-books are highly selected, for only those the author finds
reasonably funny will appear in print.
Binsted had set herself a challenging task: to ensure that every one of Jape’s jokes
would be amusing. Her follow-up research showed that although none were regarded as
exceptionally funny, very few produced no response at all. This contrasts with some other
AI-models of creativity, such as AM [16], where a high proportion of the newly generated
structures are not thought interesting by human beings.
It does not follow that all AI-modelling of creativity should emulate Binsted’s ambition.
This is especially true if the system is meant to be used interactively by human beings,
to help their own creativity by prompting them to think about ideas that otherwise they
might not have considered. Some “unsuccessful” products should in any case be allowed,
as even human creators often produce second-rate, or even inappropriate, ideas. Jape’s
success is due to the fact that its joke-templates and generative schemata are very limited.
Binsted identifies a number of aspects of real-life riddles which are not parallelled in Jape,
and whose (reliably funny) implementation is not possible in the foreseeable future. To
incorporate these aspects so as to produce jokes that are reliably funny would raise thorny
questions of evaluation (see Section 4).
As for AI-models of analogy, most of these generate and evaluate analogies by using
domain-general mapping rules, applied to prestructured concepts (e.g. [7,12,13]). The
creators of some of these models have compared them with the results of psychological
experiments, claiming a significant amount of evidence in support of their domain-general
approach. In these models, there is a clear distinction between the representation of
a concept and its mapping onto some other concept. The two concepts involved usually
remain unchanged by the analogy.
Some AI-models of analogy allow for a more flexible representation of concepts.
One example is the Copycat program, a broadly connectionist system that looks for
analogies between alphabetic letter-strings [11,18]. Copycat’s concepts are context-sensitive
descriptions of strings such as “mmpprr” and “klmmno”. The two m’s in the
first string just listed will be described by Copycat as a pair, but those in the second string
will be described as the end-points of two different triplets.
One might rather say that Copycat will “eventually” describe them in these ways. For
its concepts evolve as processing proceeds. This research is guided by the theoretical
assumption that seeing a new analogy is much the same as perceiving something in a new
way. So Copycat does not rely on ready-made, fixed, representations, but constructs its own
in a context-sensitive way: new analogies and new perceptions develop together. A part-built description that seems to be mapping well onto the nascent analogy is maintained,
and developed further. One that seems to be heading for a dead end is abandoned, and
an alternative begun which exploits different aspects. The model allows a wide range
of (more or less daring) analogies to be generated, and evaluated. The degree to which
the analogies are obvious or far-fetched can be altered by means of one of the system-parameters.
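The pair-versus-triplet contrast can be caricatured in a few lines. The function below is a crude, serial stand-in for Copycat's emergent descriptions (not its actual algorithm): it reads a doubled letter as a sameness pair unless both neighbouring successor-runs are long enough to support the triplet reading.

```python
def describe_doubles(s):
    """For each doubled letter in a lowercase string, choose between
    two context-sensitive readings: a sameness pair, or the meeting
    point of two alphabetic-successor runs."""
    out = {}
    for i in range(len(s) - 1):
        if s[i] != s[i + 1]:
            continue
        # Length of the successor run ending at position i (e.g. 'klm').
        left = 1
        while i - left >= 0 and ord(s[i - left + 1]) - ord(s[i - left]) == 1:
            left += 1
        # Length of the successor run starting at position i + 1 (e.g. 'mno').
        right = 1
        while i + 1 + right < len(s) and ord(s[i + right + 1]) - ord(s[i + right]) == 1:
            right += 1
        if left >= 3 and right >= 3:
            out[s[i]] = "end-points of two successor triplets"
        else:
            out[s[i]] = "a sameness pair"
    return out
```

On "mmpprr" the m's have no successor context, so the pair reading wins; on "klmmno" the runs klm and mno pull the same two letters into the triplet reading. The same surface feature receives different descriptions depending on its surroundings.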
Whether the approach used in Copycat is preferable to the more usual forms of (domain-general) mapping is controversial. Hofstadter [11] criticizes other AI-models of analogy
for assuming that concepts are unchanging and inflexible, and for guaranteeing that the
required analogy (among others) will be found by focussing on small representations
having the requisite conceptual structures and mapping rules built in. The opposing camp
rebut these charges.
They argue that to identify analogical thinking with high-level perception, as Hofstadter
does, is to use a vague and misleading metaphor: analogical mapping, they insist, is
a domain-general process which must be analytically distinguished from conceptual
representation. They point out that the most detailed published account of Copycat
[18] provides just such an analysis, describing the representation-building procedures as
distinct from, though interacting with, the representation-comparing modules. They report
that the Structure Mapping Engine (SME), for instance, can be successfully used on
representations that are “very large” as compared with Copycat’s, some of which were
built by other systems for independent purposes. They compare Copycat’s alphabetic
microworld with the “blocks world” of 1970s scene analysis, which ignored most of
the interesting complexity (and noise) in the real world. Although their early models
did not allow for changes in conceptual structure as a result of analogising, they refer
to work on learning (using SME) involving processes of schema abstraction, inference
projection, and re-representation. Moreover (as remarked above), they claim that their
psychological experiments support their approach to simulation. For example, they say
there is evidence that memory access, in which one is reminded of an (absent) analog,
depends on psychological processes, and kinds of similarity, significantly different from
those involved in mapping between two analogs that are presented simultaneously.
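The domain-general position can be illustrated with a toy mapper in the spirit of structure-mapping. The solar-system/atom facts and the predicate-identity matching rule below are drastic simplifications for illustration; SME's actual algorithm handles systematicity, inference projection, and much larger representations.

```python
# Two prestructured representations as (predicate, arg1, arg2)
# tuples; the facts are the classic solar-system/atom example,
# heavily simplified.
SOLAR = [("attracts", "sun", "planet"),
         ("orbits", "planet", "sun"),
         ("hotter", "sun", "planet")]
ATOM  = [("attracts", "nucleus", "electron"),
         ("orbits", "electron", "nucleus")]

def map_relations(base, target):
    """Pair up relations purely by predicate identity, with no
    domain knowledge, and return the induced object
    correspondences. Relations with no counterpart in the target
    (like 'hotter') are simply left unmapped."""
    pairs = {}
    for pred, a1, a2 in base:
        for pred2, b1, b2 in target:
            if pred == pred2:
                pairs[a1] = b1
                pairs[a2] = b2
    return pairs
```

The mapping rule never inspects what "sun" or "nucleus" means; that domain-generality is exactly the property the SME camp defends, and exactly what the Copycat camp argues is bought by building the requisite structure into the representations beforehand.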
The jury remains out on this dispute. However, it may not be necessary to plump
absolutely for either side. My hunch is that the Copycat approach is much closer
to the fluid complexity of human thinking. But domain-general principles of analogy
are probably important. And these are presumably enriched by many domain-specific
processes. (Certainly, psychological studies of how human beings retrieve and interpret
analogies are likely to be helpful.) In short, even combinational creativity is, or can be, a
highly complex matter.
The exploratory and transformational types of creativity can also be modelled by
AI-systems. For conceptual spaces, and ways of exploring and modifying them, can be
described by computational concepts.
Occasionally, a “creative” program is said to apply to a wide range of domains, or
conceptual spaces, as EURISKO, for instance, does. But to make this generalist
program useful in a particular area, such as genetic engineering or VLSI-design,
considerable specialist knowledge has to be provided if it is not to generate hosts of
nonsensical (as opposed to merely boring) ideas. In general, providing a program with
a representation of an interesting conceptual space, and with appropriate exploratory
processes, requires considerable domain-expertise on the part of the programmer-or
at least on the part of someone with whom he cooperates. (Unfortunately, the highly
subject-bounded institutional structure of most universities works against this sort of