Drawing on the ambivalence of software's existence—both concrete and abstract—as well as on the various ways in which software is a complex cognitive object to grasp, we now investigate the means deployed to render it meaningful to an individual. As we have seen in empirical studies, programmers resort to textual perusal in order to build up mental models.
In this section, we look at the particular syntactic tokens used to metaphorically convey the meaning of a computational element, as well as the medium through which these tokens are perused—integrated development environments. This will conclude our inquiry into software's complexities and into how metaphors and textual manipulation facilitate the construction of mental models, before we inquire specifically into the ways in which aesthetics play a role in this process.
Metaphors in computation
Our understanding of metaphors relies on the work of George Lakoff and Mark Johnson89 due to their requalification of the nature and role of metaphor beyond an exclusively literary one. While Lakoff and Johnson's approach to the conceptual metaphor will serve as a basis to explore these linguistic devices as a cognitive means across software and narrative, we also argue that Ricoeur's focus on the tension of the statement, rather than primarily on the word, will help us better understand some of the aesthetic manifestations and workings of software metaphors. Following a brief overview of their contributions, we then examine the various uses of metaphor in software, from end-users to programmers.
We start from the most commonly used definition of metaphor: that of labeling one thing in terms of another, thereby granting additional meaning to the subject at hand. Our approach here will also bypass some of the more minute distinctions made between literary devices such as metonymy (in which the two things mentioned are already conceptually closely related), comparison (explicitly assessing differences and similarities between two things, often from a value-based perspective) and synecdoche (representing a whole by a subset), as we consider these all subsets of the class of metaphors.
Lakoff and Johnson's seminal work develops a theory of conceptual metaphors by highlighting their essential dependence on pre-existing cognitive structures, which we associate with already-understood concepts. The metaphor maps a source domain (made up of cognitive structures) to a target domain. In the process, they extend the field of applicability of metaphors from the strictly literary to the broadly cultural: metaphors work because each of us has some conception of the domains involved in the metaphorical process.
Metaphors rely in part on a static understanding, resulting in a fixed meaning from the application of a given source to a given target, which can nonetheless suggest the property of dynamic evolution. These source cognitive structures possess schemas, which are defined enough not to be mistaken for something else, but broad enough to allow for multiple variants of themselves to be applied to various targets, providing both reliability and diversity (
Metaphors We Live By by George Lakoff, Mark Johnson, 1980.
. As we will see below, their approach allows us to focus not just on textual objects, but on the vast range of metaphors also used in computing-related environments. Given that the source of the metaphor should be well-grounded, with as little variability as possible, in order to qualify a potentially ill-defined target domain, we see how this is a useful mechanism to provide an entry point for end users and novice programmers to grasp new or foreign concepts.
Starting with the role of metaphors manifested in expressions such as the desktop, the mouse, or the cloud for end-users, we will then turn to programmers' relationships to their environment as understood metaphorically. The relationship between poetic metaphor and source code will be developed in Abstraction and metaphors, where we will see that metaphor-induced tensions can be a fertile ground for poetic creation through aesthetic manifestations.
Metaphors for end-users
It is interesting to consider that the first metaphor in computing might be concomitant with the first instance of modern computing—the Turing machine. While Turing machines are widely understood as being manifested in what we call digital computers (laptops, tablets, smartphones, etc.), and thus as belonging definitely to the realm of mechanical devices, the Turing machine is not strictly a machine per se. Rather, it is more accurately defined as a mathematical model which describes an abstract machine. Indeed, as we saw in Software ontology
, computers cannot be proven or assumed to be machines, because their terminology comes from logical, textual, or discursive traditions (e.g. reference, statement, names, recursion, etc.), and yet they are still built (
On the Origin of Objects by Brian Cantwell Smith, 1998.
. Humans can be considered Turing machines (in fact, one of the implicit requirements of the Turing machine is that, given enough time and resources, a human should be able to compute anything that the Turing machine can compute), and non-humans can also be considered Turing machines90. Debates in computer science on the nature of computing (
Philosophy of Computer Science: An Introductory Course by William J. Rapaport, 2005.
have shown that computation is far from being easily reduced to a simple mechanical concern, and the complexity of the concept is perhaps why we ultimately revert to metaphors in order to better grasp it.
As non-technical audiences came into contact with computation through the advent of the personal computer, metaphors became more widespread, entering public discourse as personal computing became available to ever larger audiences. With the release of the Xerox Star, features of the computer which had until then been described as data processing were given a new life in the public discourse. The Star was seminal in that it introduced technological innovations such as a bitmapped display, a two-button mouse, and a window-based display including icons and folders, called a desktop. In this case, the desktop metaphor relies on a previous understanding of what a desktop is and what it is used for in the context of physical office work; since early personal computers were marketed for business applications, these metaphors built on the broad cognitive structures of the user base in order to help users make sense of this new tool.
Paul du Gay, in his cultural study of the Walkman, makes a similar point when he describes Sony's invention, a never-before-seen compound of technological innovations, in terms of pre-existing and well-established technologies (
Doing Cultural Studies: The Story of the Sony Walkman by Paul Gay, Stuart Hall, Linda Janes, Anders Koed Madsen, Hugh Mackay, Keith Negus, 2013.
. The icon of a floppy disk for writing data to disk, the sound of crumpled paper for removing data from disk, the designation of a broad network of satellite, underground and undersea communications as a cloud: these are all metaphors which help us make a certain sense of the broad possibilities brought forth by the computing revolution (
Danger! Metaphors at Work in Economics, Geophysiology, and the Internet by Sally Wyatt, 2004. [link]
. Even the clipboard, presented to the user as a way to copy content across applications, does not behave at all like a real clipboard (
How the clipboard works by Hugo Osvaldo Barrera, 2022. [link]
The work of metaphors takes on an additional dimension when we introduce the concept of interfaces. As permeable membranes which enable (inter)actions between the human and the machine, they are essential insofar as they render visible, and allow for, various kinds of agency, based on different degrees of understanding. Departing from the physically passive posture of the reader towards an active engagement with a dynamic system, interfaces highlight even further the cognitive and (inter)active role of the metaphor.
These depictions of things-as-other-things influence the mental model which we build of the computer system we interact with. For instance, the prevalent windows metaphor of our contemporary desktop and laptop environments obfuscates the very concrete fact that the CPU (or CPUs, in the case of multi-core architectures) executes one thing at a time, at speeds which cannot be intuitively grasped by human perception. Alexander Galloway's work on interfaces as metaphorical representations suggests a similar concern with obfuscation, as he recalls Jameson's theory of cognitive mapping. Jameson uses it in a political and historical context, defining a cognitive mapping as "a situational representation on the part of the individual subject to that vaster and properly unrepresentable totality which is the ensemble of society's structures as a whole" (
Postmodernism Or, The Cultural Logic Of Late Capitalism by Fredric Jameson, 1991.
), insofar as a cognitive map is necessary to deploy agency in a foreign spatial environment, an environment which Jameson associates with late capitalism.
Galloway productively deploys this heuristic in the context of interfaced computer work: cognitive mapping is the process by which the individual subject situates himself within a vaster, unrepresentable totality, a process that corresponds to the workings of ideology91. Here, we can see how metaphors act both as cognitive tools to make sense of objects and as obfuscating devices which cloak the reality of the environment92. The cognitive processes enabled by metaphors help provide a certain sense of the unthinkable, of that which is too complex to grasp and must therefore be put into symbols (words, icons, sounds, etc.).
Nielsen and Gentner elaborate on some of the challenges that arise when one uses metaphors not just for conceptual understanding, but for further conceptual manipulation. In The Anti-Mac Interface, they point out that differences in features between target domain and source domain are inevitable. For instance, a physical pen can mark up any part of a physical form, whereas a tool symbolized by a pen icon in document-editing software might restrict an average user to specific fields on the form. Their study leads them to assess alternatives to one kind of interface93, in order to highlight how computer systems with similar capabilities (both being Turing-complete machines) could differ in (a) the assumptions made about the intent of the user, (b) the assumptions made about the expertise level of the user, and (c) the means presented to the user in order to have them fulfill their intent (
The Anti-Mac interface by Don Gentner, Jakob Nielsen, 1996.
Moving away from userland, in which most of these metaphors exist, we now turn to the kinds of metaphors used by programmers and computer scientists themselves. Since the sensual reality of the computer is a high-frequency vibration of electricity, one of the first steps taken to productively engage with computers is to abstract that reality away. The word computer itself can be considered an abstraction: originally used to designate the women manually inputting the algorithms in room-scale mainframes, the distinction between the machine and its operator was considered unnecessary. The relation between metaphor and abstraction is a complex one, but we can say that metaphorical thought requires abstraction, and that the process of abstraction ultimately implies designating one thing by the name of another (a woman by a machine's name, or a machine by a woman's), being able to use them interchangeably, and therefore lowering the cognitive friction inherent in the process of specification, freeing up mental resources to focus on the problem at hand (
On Software, or the Persistence of Visual Knowledge by Wendy Hui Kyong Chun, 2005.
Metaphors are implicitly known not to be true in their most literal sense. Max Black, in Models and Metaphors, argues that metaphors are too loose to be useful in analytic philosophy; but, like models, they help make concepts graspable and render operating the computer conceivable, independently of the accuracy of the metaphor in depicting the reality of the target domain.
Abstraction, metaphors and symbolic representations are therefore useful tools when it comes to understanding some of the structures and objects which constitute computing and software, both in terms of trying to represent to ourselves what it is that a computer can do and effectively does, and in terms of explaining to the computer what it is we are trying to operate on (from an integer, to a non-ASCII word, to a renewable phone subscription, to human language).
When they concern the work of programmers, the tools deployed during this representational process differ from conventional or poetic metaphors insofar as they imply some sort of productive engagement, and are therefore empirically verifiable or falsifiable. These models are means through which we aim at constructing the conceptual structures on which metaphors also operate, and at making them explicit in formal symbol systems, such as programming languages.
Programmers, like users, rely heavily on metaphors to project meaning onto the entities they manipulate. Fundamentally, the work of these metaphors is no different from that of the metaphors operating in public discourse, or at the graphical interface level; nonetheless, they show how metaphors permeate computer work in general, and source code in particular.
Perhaps one of the first metaphors a programmer encounters when learning the discipline is the one stating that a function is like a kitchen recipe: one specifies a series of instructions which, given some input ingredients (arguments), produce an output (a return value). However, the recipe metaphor does not allow for an intuitive grasp of overloading, the process through which a function can be called in the same way but do different things with different inputs. Similarly, the term server is conventionally associated with, and represented as, a machine sending back data when asked for it, when really it is nothing but an executed script or process running on said machine.
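Where the recipe metaphor breaks down can be made concrete. Python has no overloading proper, but the standard library's functools.singledispatch approximates it: one function name whose behaviour varies with the type of its argument. The describe function below is a hypothetical illustration, not drawn from any particular codebase:

```python
from functools import singledispatch

# One "recipe" name, several preparations: which body runs depends on
# the type of the first argument, not on a single fixed series of steps.
@singledispatch
def describe(value):
    return f"something: {value!r}"

@describe.register
def _(value: int):
    return f"an integer: {value}"

@describe.register
def _(value: list):
    return f"a list of {len(value)} items"
```

Calling describe(3) and describe([1, 2]) runs different bodies under the same name, which is precisely what a single, linear recipe fails to convey.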
Another instance of symbolic use relying on metaphorical interpretation can be found in the word stream. Originally designating a flow of water within its bed, it has gradually come to designate a continuous flow of contingent binary signs. Memory, in turn, stands for record, and is stripped of the essentially partial, subjective and fantasized aspects usually highlighted in literary works (perhaps volatile memory comes closer to that point). Finally, objects, which came to prominence with the rise of object-oriented programming, have little to do with the physical properties of objects (no affordance for being traded, for acting as social symbols, for gaining intrinsic value); rather, the word is used to highlight their boundedness, their states and actions, and their ability to be manipulated without interfering with other objects94. We can also note a set of computational concepts of scale, involving macro, global, extend, monolith, bloat, etc. That being said, programmer-facing metaphors tend to be less systematic than user-facing ones, highlighting the complexity of making the nature of software explicit, and the ad hoc nature of some of the terms used to describe parts of a computational system.
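The stream metaphor does carry over into code fairly directly: a stream is consumed piece by piece as it flows, rather than held whole. A minimal sketch, using Python's in-memory io.StringIO as a stand-in for any source (the drain helper is a hypothetical name):

```python
import io

def drain(stream, chunk_size=4):
    """Consume a stream chunk by chunk; stopping early leaves the rest unread."""
    chunks = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # the flow has run dry
            break
        chunks.append(chunk)
    return chunks
```

drain(io.StringIO("hello world")) yields the text in four-character scoops; breaking out of the loop midway would leave the whole incomplete, matching the intuition the metaphor supplies.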
Most of these designations, stating one thing in terms of another, aren't metaphors in the full-blown, poetic sense, but they do, again, hint at the need to represent complex concepts in humanly graspable terms, what Paul Fishwick calls text-based aesthetics (
Aesthetic Computing by Paul A. Fishwick, 2006.
. The need for these is only semantic insofar as it allows an intended interaction with the computer to be carried out successfully—e.g. one has an intuitive understanding that interrupting a stream is an action which might result in the incompleteness of the whole. This process of linguistic abstraction doesn't actually require clear definitions of the concepts involved. For instance, the terminology of modern so-called cloud computing uses a variety of terms stacked onto one another in what might seem to have no clear denotative meaning (e.g. Google Cloud Platform offers Virtual machine compute instances), but which nonetheless has a clear operative meaning (e.g. the thing on which my code runs). This further qualifies the complexity of the sense-making process in dealing with computers: we don't actually need to understand precisely what is meant by a particular word, as long as we use it in a way which results in the expected outcome95. That being said, there is a certain correlation between skill and metaphor: the more skilled a programmer is, the less they resort to metaphors and the more they consider things "as they are" (
Knowledge organization and skill differences in computer programmers by Katherine B. McKeithen, Judith S. Reitman, Henry H. Rueter, Stephen C. Hirtle, 1981.
This need to re-present the specificities of the machine has also been one of the essential drives in the development of programming languages. Since we cannot easily and intuitively deal with binary notation to represent complex concepts, programming languages help us deal with this hurdle by presenting things in terms of other things. Most fundamentally, programming languages represent binary signs in terms of the English language (e.g. from binary to Assembly, see Levels of software
). This is, again, by no means a metaphorical process, but rather an encoding process, in which tokens are separated and parsed into specific values, which are then processed by the CPU as binary signs.
Still, this abstraction layer offered by programming languages allows us to focus on what we want to do, rather than on how to do it. The metaphorical aspect comes in when the issue of interpretation arises, as the possibility of dealing with more complex concepts required us to grasp them in a non-rigorous way, one which would not have a one-to-one mapping between concepts. Allen Newell and Herbert A. Simon, in their 1975 Turing Award lecture, offer a good example of how symbolic manipulation relates inherently to understanding and interpretation:
In none of [Turing and Church's] systems is there, on the surface, a concept of the symbol as something that designates .
The complement to what they call Turing and Church's automatic formal symbol manipulation is this process of interpretation, which they define simply as the ability of a system to designate an expression and to execute it. We encounter here one of the essential qualities of programming languages: the ambivalence of the term interpretation. A machine interpretation is clearly different from a human interpretation: in fact, most people understand binary as a system comprised of two numbers, 0 and 1, when really it is interpreted by the computer as a system of two distinct signs (red and blue, Alex and Max, hot and cold, etc.). To assist in the process of human interpretation, metaphors have played a part in helping programmers construct useful mental representations related to computing. Keywords such as loop
, or fork
are all metaphorical denominations for computing processes.
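That binary is a system of two distinct signs, rather than of two numbers, can be demonstrated directly: any pair of tokens will do, provided writer and reader agree on which sign plays which role. A small sketch, in which the "hot"/"cold" tokens are arbitrary stand-ins:

```python
def decode(signs, one="hot", bits_per_char=8):
    """Read a sequence of two-valued signs as text: 'hot' plays the role
    conventionally written 1, any other sign the role written 0."""
    bits = "".join("1" if s == one else "0" for s in signs)
    return "".join(chr(int(bits[i:i + bits_per_char], 2))
                   for i in range(0, len(bits), bits_per_char))
```

Encoding a string as a sequence of "hot" and "cold" tokens and decoding it back recovers the text; the digits 0 and 1 never appear in the signal itself, only in our conventional reading of it.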
These metaphors can go both ways: helping humans understand computing concepts and, to a certain extent, helping computers understand human concepts. This reverse process, using metaphors to represent concepts to the computer, something we touched upon in Modelling complexity
, brings forth issues of conceptual representation through formal symbolic means. The work of early artificial intelligence researchers consisted not just in making machines perform intelligent tasks, but also implied that intelligence itself should be clearly and unambiguously represented. The work of Terry Winograd, for instance, was concerned with language processing—that is, interpretation and generation. Through his inquiry, he touches on the different ways to represent the concept of language in machine-operational terms, and highlights two possible representations which would allow a computer to interact meaningfully with language (
Language As a Cognitive Process: Syntax by Terry Winograd, 1982.
. He considers a procedural representation of language, based on algorithms and rules to follow in order to generate an accurate linguistic model, and a declarative representation of language, which relies on data structures which are then populated in order to create valid sentences. At the beginning of his exposé, he introduces the historically successive metaphors which have been used to build an accurate mental representation of language (language as law, language as biology, language as chemistry, language as mathematics). As such, we too present language in terms other than itself in order to make it actionable within a computing environment, in a mutually informing movement.
Metaphors are used as cognitive tools to facilitate the construction of mental models of software systems. The implication of spatial and visual components in mental models, already highlighted by Lakoff and Johnson and pointed out in the psychology experiments on programmers, allows us to turn to metaphors as an architecture of thought (
Cathedrals in the Mind: The Architecture of Metaphor in Understanding Learning by Kathleen Forsythe, 1986.
. Metaphors operate cognitively, Lakoff and Johnson argue, because of the embodiment which underpins every individual's perception. Therefore, such a use of metaphors points to the spatial nature of the target domain, something already suggested by the concept of mapping in The psychology of programming
. Complementing the semantic structure of metaphor, we now turn to another conception of space in program texts: the syntactic structure of source code, upon which another kind of tool can operate.
Tools as a cognitive extension
Metaphors make use of their semantic properties to allow users to build an effective mental model of what a system is or does; as a result, they allow programmers to build up hypotheses and take epistemic actions to see whether their mental model behaves as expected. Some of the keywords of programming languages are thus metaphorical. However, one can also make use of the syntactic properties of source code in order to facilitate understanding differently. We see here how these tools take part in a process of extended cognition.
We have seen how interfaces decide on the way abstract entities are represented, delimited and accessed. They can nonetheless also go beyond representation in order to alleviate cognitive load through technical affordances, by providing as direct an access as possible to the underlying abstract entities represented in source code's structure.
Looking at it from the end-user's perspective, there is software which focuses on knowledge acquisition through direct manipulation. For instance, Ken Perlin's Chalktalk focuses on freehand input creation and programmatic input modification in order to explore properties and relations of mathematical objects (e.g. geometrical shapes, vectors, matrices), while Bret Victor's Tangle focuses on a very sparse textual representation of a dynamic numerical model. The epistemic actions taken within this system thus consist in manipulating the numbers presented in the text, which results in the modification of the text based on these numbers (
Explorable Explanations by Bret Victor, 2011. [link]
For programmers, the kind of dedicated tool used to deal with source code is called an Integrated Development Environment (IDE). With a specific set of features developed over time and catered to the needs and practices of programmers, IDEs cover multiple features to support the writing, reading, versioning and executing of software—operations which go beyond the simple reading of text (
Evaluation of integrated software development environments: Challenges and results from three empirical studies by Rex Bryan Kline, Ahmed Seffah, 2005.
One of the first interfaces for writing computer code was the text editor EMACS (an acronym for Editor MACroS), with a first version released in 1976. Containing tens of thousands of commands to be input by the programmer at the surface level in order to affect the deeper levels of the computing system, EMACS allows for remote access of files, modeful and non-linear editing, as well as buffer-based manipulation (
Multics Emacs History/Design/Implementation by Bernard S. Greenberg, 1996. [link]
. This kind of text editor acts as an interfacing system which allows for the almost real-time manipulation of digitized textual objects.
While software such as EMACS and Vim is mostly focused on the productivity of generic text editing, other environments focused specifically on software development tasks in a particular programming language, such as Turbo Pascal, Maestro I, the Apple Macintosh Programmer's Workshop (1985) (
Macintosh Programmers Workshop by Joel West, 1987.
, or the Squeak system for the Smalltalk programming language (
Back to the future: The story of Squeak, a practical Smalltalk written in itself by Dan Ingalls, Ted Kaehler, John Maloney, Scott Wallace, Alan Kay, 1997.
. These tools take into account the particular attributes of software to integrate the tasks of development (such as linking, compiling, debugging, block editing and refactoring) into one piece of software, allowing the programmer to switch seamlessly from one task to another, or allowing a task to run in parallel with another (e.g. indexing and editing). Kline and Seffah state the goals of such IDEs: "Such environments should (1) reduce the cognitive load on the developer; (2) free the developer to concentrate on the creative aspects of the process; (3) reduce any administrative load associated with applying a programming method manually; and (4) make the development process more systematic." (
Evaluation of integrated software development environments: Challenges and results from three empirical studies by Rex Bryan Kline, Ahmed Seffah, 2005.
One of the ways in which IDEs started to achieve these goals was by developing more elaborate user interfaces, involving more traditional concepts of aesthetics (such as shape, color, balance, distance, symmetry) at the surface level—concerned only with the source code's representation, not with its manipulation. Indeed, since the advent of these IDEs, studies have demonstrated the impact that such formal arrangement has on program comprehension (
Typographic style is more than cosmetic by Paul W. Oman, Curtis R. Cook, 1990.
A Systematic Literature Review on the Impact of Formatting Elements on Program Understandability by Delano Oliveira, Reydne Bruno, Fernanda Madeiral, Hidehiko Masuhara, F. C. Filho, 2022.
. Spacing, alignment, syntax highlighting and casing are all parameters which have an impact on the readability, and therefore the understandability, of code, sometimes to the extent that the formatting capabilities of the tool influence such understandability through particular formal configurations96.
Understanding source code is affected both by its legibility (a matter of syntax: whether one can quickly visually scan the text and determine its main parts, from blocks to the words themselves) and by its readability (a matter of semantics: whether one knows the meaning of the words, and their role in the group) (
Evaluating Code Readability and Legibility: An Examination of Human-centric Studies by Delano Oliveira, Reydne Bruno, Fernanda Madeiral, F. C. Filho, 2020.
Understanding the effects of code presentation by Jason T. Jacques, Per Ola Kristensson, 2015.
- Example of a program text without syntax highlighting nor machine-enforced indentation. See for a functional equivalent, formatted.
IDEs therefore take over some of the mental operations performed by programmers when they engage with source code, such as representing code blocks through proper indentation. The increasing automation of tooling and workflow in software such as Eclipse, IntelliJ, NetBeans, WebStorm and Visual Studio Code97 has led to further entanglements of technology and appearance. By organizing and revealing code space through actions such as self-documentation, folding code blocks, finding function declarations, batch reformatting and debug execution, these tools facilitate cognitive operations such as chunking, tracing, or highlighting beacons (
Code bubbles: Rethinking the user interface paradigm of integrated development environments by Andrew Bragdon, Steven P. Reiss, Robert Zeleznik, Suman Karumuri, William Cheung, Joshua Kaplan, Christopher Coleman, Ferdi Adeputra, Joseph J. LaViola, 2010.
. These technical features show how a tool which operates primarily at the aesthetic level has consequences for the understandability of the system represented, even though this is, again, dependent on the skill level of the programmer (
Supporting comprehension of unfamiliar programs by modeling cues by Naveen Kulkarni, Vasudeva Varma, 2017.
A significant dimension along which source code is automatically formatted is the use of style guides. The evolution of software engineering, from the individual programmer implementing ad hoc and personal solutions to groups of programmers coordinating across time and space to build and maintain large, distributed pieces of software, brought with it the necessity to harmonize and standardize how code is written: style guides started to be published to normalize the visual aspect of source code. Linters, in turn, are programs which analyze the source code being written in order to flag suspicious writing (suspicious either from a functional or from a stylistic perspective), often enforcing such style guides. They act as a sort of intermediary object, insofar as they assist individuals in the process of creating another object (
Les objets intermédiaires dans la conception. Éléments pour une sociologie des processus de conception by Alain Jeantet, 1998.
. Making use of formal syntax, IDEs' automatic styling of source code contributes to collective sense-making, something that we discuss further in Styles and idioms in programming
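The kind of analysis a linter performs can itself be sketched in a few lines. The toy checker below is a drastic simplification (real linters such as pylint or ESLint apply much richer rule sets): it parses a program text and flags names that are assigned but never read.

```python
import ast

def lint_unused(source):
    """Flag variables that are written but never read: suspicious,
    though not necessarily wrong, which is exactly the kind of
    judgment a linter encodes."""
    assigned, used = {}, set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.setdefault(node.id, node.lineno)
            else:
                used.add(node.id)
    return [f"line {line}: '{name}' is assigned but never used"
            for name, line in assigned.items() if name not in used]
```

Run on a snippet such as "x = 1; y = 2; print(x)", it reports only y, acting as an intermediary object: it does not execute the code, it comments on its writing.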
This move from legibility (clear syntax) to readability (clear semantics) enables a certain kind of fluency: the process of building mental structures that disappear in the interpretation of the representations. The letters and words of a sentence are experienced as meaning rather than markings, the tennis racquet or keyboard becomes an extension of one's body, and so forth. Well-functioning interfaces are thus interfaces which disappear from the cognitive process of their user, allowing them to focus on ends rather than on means (
The Interface Effect by Alexander R. Galloway, 2012.
, leading to what Paul A. Fishwick has coined aesthetic programming, an approach in which attention paid to the representation of code in sensory ways results in a better grasp of the metaphors at play in code. Ultimately, by enabling different modes of representing the various processes and states that constitute computation, interfaces enable the navigation of information space98.
Finally, IDEs also enable epistemic action, not just through representation but also through interaction. For instance, IDEs include debuggers, specialized developer tools which enable the step-by-step execution of each line of code, making it understandable at human time rather than at machine time. By slowing down the execution of the CPU, the debugger also suggests a different representation of the program text at runtime: that of a landscape. The debugger's interface extends the metaphor of the step in a further spatial manner99, and as such hints at the program text as a spatial environment which can be explored in multiple dimensions.
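The debugger's step rests on machinery that can be sketched directly. Python exposes the same hook its own debugger, pdb, relies on—sys.settrace; the helper below (a hypothetical example) records the lines a function walks through, one step at a time:

```python
import sys

def steps_through(func):
    """Record the line offsets a function visits, as a debugger's
    'step' command would walk them one at a time."""
    first = func.__code__.co_firstlineno
    visited = []
    def tracer(frame, event, arg):
        # A "line" event fires before each line executes, i.e. each step.
        if event == "line" and frame.f_code is func.__code__:
            visited.append(frame.f_lineno - first)
        return tracer
    sys.settrace(tracer)  # the hook pdb itself uses
    try:
        func()
    finally:
        sys.settrace(None)
    return visited
```

Tracing a small function with a two-iteration loop shows the loop body's line appearing twice in the walk: the program text revisited as a landscape rather than read once, top to bottom.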
Automatic tools therefore operate at the surface level, but with consequences at the deep level, helping visualize and navigate the structure of a program text. In this case, we witness how computer-aided software engineering in the form of IDEs can be considered a cognitive tool: a combination of surface representation and direct interaction, whose formal arrangements and affordances facilitate engagement with the conceptual structures underlying a program text. Perception and comprehension of source code are therefore increasingly entangled with its automated representation.
The roots of computer-enabled knowledge management can be found in the work of the encyclopedists and scientists of seventeenth-century Europe, as they approached knowledge as something which could, and should, be rationalized, organized, and classified in order to be retrievable, comparable, and actionable (
The Software Arts by Warren Sack, 2019.
. Scholars such as Roland Barthes, Jacques Derrida, or Umberto Eco developed specific knowledge-management techniques, through the use of index cards, in order to focus on the arguments and ideas at hand rather than on smaller organizational details; whether paper or digital, technology itself is a prosthesis for memory, an external storage which offloads the cognitive burden of having to remember things (
The card index as creativity machine by R. Wilken, 2010.
Laying out his vision for a Man-Computer Symbiosis, J.C.R. Licklider, project leader of what would become the Internet and a trained psychologist, emphasized information management. He saw the computer as a means to "augment the human intellect by freeing it from mundane tasks" (
Man-Computer Symbiosis by J. C. R. Licklider, 1960.
. By delegating such mundane tasks, such as manually copying numbers from one document to another, one can focus on the more cognition-intensive tasks at hand. While improvements in the input, speed, and memory of contemporary hardware have supported Licklider's perspective, a single limitation that he pointed out in the 1950s nonetheless remains: the problem of language.
What we want to accomplish, and how we want to accomplish it, are complex questions for a computer to process. The subtleties of language imply ambiguities which are not the preferred mode of working of a logical arithmetic machine. If machines can help us think, there are nonetheless some aspects of that thinking which cannot easily be translated into the computer's native, formal terms; the work of interface designers and tool builders has therefore attempted to automate away most of what can be automated, and to facilitate the more mundane tasks done by a programmer. Software tools are therefore used to think and explore concepts, by supporting epistemic actions in various modalities (
Humane representation of thought: A trail map for the 21st century by Bret Victor, 2014.
The computer therefore supports epistemic actions through its use of metaphors (to establish a fundamental base of knowledge) and of actions (to probe and refine the validity of those metaphors) to build a mental model of the problem domain. In the case of IDEs, the problem domain is the source code, and these interfaces, by providing means of scanning and navigating the source code, are part of what Simon Penny calls, after Clark and Chalmers, extended cognition (
Making Sense: Cognition, Computing, Art and Embodiment by Simon Penny, 2019.
. Extended cognition posits that our thinking happens not only in our brains, but is also located in the tools we use to investigate reality and to deduce a conceptual model of this reality based on empirical results. We consider IDEs a specific manifestation of extended cognition, actively helping the programmer to define, reason about, and explore a code base. The means of taking epistemic action, then, are also factors contributing to our understanding of the program text at hand. In this spirit, David Rokeby goes as far as qualifying the computer as a prosthetic organ for philosophy, insofar as it helps him formulate accurate mental models as he interacts with it through computer interfaces, compensating for its formal limitations100.
This brings us back to our discussion of Simondon's technical and aesthetic modes of existence (see Software ontology). As highlighted by the use of software tools in the sense-making process of a program text, formal syntax only operates on distinct, fragmented concepts, as evoked in the technological mode101. In turn, the aesthetic mode, expressed through the more systemic and totalizing approach of metaphors and of sensual perception, can compensate for this fragmenting process. This suggests that the cognitive process of understanding technical artifacts, such as source code, necessitates complementary technical and aesthetic modes of perception.
Programmers face the complexity of software on a daily basis, and therefore use specific cognitive tools to help them. While our overall argument is that aesthetics is one of those cognitive tools, we focused in this section on two different, yet widely used, kinds: the metaphor and the integrated development environment.
We pointed out the role that metaphors play in creating connections between pre-existing knowledge and new knowledge, in order to facilitate the construction of mental models of the target domain. Metaphors are used by programmers at different levels, helping them grasp concepts (e.g. memory, objects, packages) without having to bother with details. As we will see in the following chapters (see Literary metaphors
), metaphors are also used by programmers in the source code they write in order to elicit this ease of comprehension for their readers.
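A brief, hypothetical sketch may make this concrete; the function names and the plumbing metaphor below are invented for illustration, contrasting the same routine written literally and metaphorically:

```python
# Literal naming: the reader must execute the code mentally to grasp it.
def f(xs, g, h):
    return [h(g(x)) for x in xs]

# Metaphorical naming: identifiers borrow from a source domain (plumbing)
# so the flow of data can be grasped before reading the implementation.
def flow_through_pipeline(items, filter_stage, sink_stage):
    """Each item 'flows' through a filtering 'stage' before reaching a 'sink'."""
    return [sink_stage(filter_stage(item)) for item in items]

# Both functions compute the same thing; only the second tells a story.
print(flow_through_pipeline([1, 2, 3], lambda x: x * 2, str))  # → ['2', '4', '6']
```

The two definitions are computationally identical; the difference lies entirely in the mental model the second one offers its reader.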
Programmers also rely on specific software tools in order to facilitate the scanning and exploring of source code files, while delegating mundane tasks which should not require particular programmer attention, such as linking or refactoring. The use of software to understand software is somewhat paradoxical, but nonetheless participates in extended cognition; the means we use to reason about problems affect, to a certain extent, the quality of this reasoning.
- - -
Code is therefore technical and social, and material and symbolic simultaneously. Rather, code needs to be approached in its multiplicity, that is, as a literature, a mechanism, a spatial form (organization), and as a repository of social norms, values, patterns and processes. (
The Philosophy of Software: Code and Mediation in the Digital Age by David M. Berry, 2011.
This chapter has shown that software is a complex object, an abstract artifact, existing at multiple levels and in multiple dimensions. Programmers therefore need to deal with this complexity and deploy multiple techniques to do so. Psychology studies, investigating how programmers think, have pointed out several interesting findings. First, building mental models from reading and understanding source code is not an activity which relies exclusively on the part of the brain which reads natural language, nor on the part which performs mathematical operations. Second, the reasoning style is multimodal, yet spatial, involving layered abstractions; programmers report working and thinking at multiple levels of scale, and represent parts of code as existing closer to or further from one another, in a non-linear space. Third, the form affects the content. That is, the way that code is spatially and typographically laid out helps, to a certain extent, with the understanding of said code, without affecting expertise levels or guaranteeing success.
In order to deal with this complexity, some of the means deployed to understand and grasp computers and computational processes are both linguistic and technical. Linguistic, because computer usage is riddled with metaphors which facilitate the grasping of what the presented entities are and do. These metaphors are not only aimed at end-users, but are also used by programmers themselves. Technical, because the writing and reading of code has historically relied more and more on tools, such as programming languages and IDEs, which allow programmers to seamlessly perform tasks specific to source code.
In the next chapter, we pursue our inquiry into the means of understanding, moving away from software and focusing on the aesthetic domains examined in Ideals of beauty. This will allow us to show how source code aesthetics, as highlighted by the metaphorical domains that refer to it, have the function of making the imperceptible understandable.