“The shimmering idea that the world is composed not of given forms on a fixed stage but of an atomic field of flux and churn is ancient. The idea precedes our ability to mathematize the hypothesis experimentally and predates by millennia the engineering of machines that can simulate a calculation of discrete bits of information as if they are those atoms. Close your eyes and visualize dust motes floating and falling in the white light of a projector. See them just barely touch or miss one another. This swirling and tumbling through the void is also, given some poetic license, one model of elemental computation. In the first century B.C., Lucretius called these atomic bits primordia or semina rerum, and for the Epicurean philosophical tradition, this flux is ontological and the basis of their own information theory avant la lettre. It says that what seem to naive observation to be solid figures and grounds, withdrawn into themselves and oriented as objects, are but clusters of bits that have fallen into one another over time, and will in more time fall apart and again into other things, conjugating or calculating themselves again and again. The name for the force of collision that causes their downward arcs to tumble into assemblage is translated from the Latin as swerve. Atomic bits swerve, as if by accident, and in their accumulation, the entropy of the noisy void gives way to the negentropic formulation of the world and its temporal orderliness: from this calculation, forms form. Lucretius called this economy of entanglement between atoms, located by their fluid communication in flight, the clinamen, and it has been the source of considerable philosophical and literary rumination (including Marx’s doctoral dissertation).
Today, enjoying a vantage point that includes contemporary atomic physics, we see the clinamen less as a spontaneous lurch of some thing from its track (the universe as the eventual archive of these accumulated deviations) than as interlocking fields of stochastic probabilities structuring emergent order in this way or that. The details evolve, but the idea of calculative emergence persists. The basic innovations are well known. In thirteenth century Majorca, Ramon Llull described logical machines, influencing Gottfried Leibniz, who developed a predictive calculus and a biliteral alphabet that, drawing on the I Ching, allowed for the formal reduction of any complex symbolic expression to a sequence of discrete binary states (zero and one, on and off). Later, the formalization of logic within the philosophy of mathematics (from Pierre-Simon Laplace, to Gottlob Frege, Georg Cantor, David Hilbert, and so many others) helped to introduce, inform, and ultimately disprove a version of the Enlightenment as the expression of universal deterministic processes (of both thought and physics). In 1936, with his now-famous paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” a very young Alan Turing at once introduced the theoretical basis of modern computing and demonstrated the limits of what could and could not ever be calculated and computed by a universal technology. Turing envisioned his famous “machine” according to the tools of his time to involve an infinite amount of “tape” divided into cells that can store symbols, moved along a stationary read-write “head” that can alter those symbols, a “state register” that can map the current arrangement of symbols along the tape, and a “table” of instructions that tells the machine to rewrite or erase the symbol and to move the “head,” assuming a new state for the “register” to map.
The Church-Turing thesis (developed through the 1930s and 1940s) would demonstrate that Turing’s “machine” not only could simulate algorithms, but that a universal Turing machine, containing all possible such machines, could, in theory, calculate all logical problems that are in fact computable (a limit that Turing’s paper sought to identify). The philosophical implications are thorny and paradoxical. At the same moment that Turing demonstrates the mechanical basis for synthetic logic by machines (suggesting real artificial intelligence), he partially delinks the correlation between philosophical thought and machinic calculation. The implications continue to play out in contemporary debates from robotics to neuroscience to the philosophy of physics, as has Turing’s later conceptualization of “thinking machines,” verified by their ability to convincingly simulate the performance of human-to-human interaction, the so-called Turing test. In the decades since Turing’s logic machine, computation-in-theory became computers-in-practice, and the digitalization of formal systems into mechanical systems and then back again has become a predominant economic imperative. Through several interlocking modernities, the calculation of discrete states of flux and form would become more than a way to describe matter and change in the abstract, but also a set of standard techniques to strategically refashion them as well. Computability moves from a universal logic to a generic technology (and so contemporary claims that this passage is reversible are both predictable and problematic). Although the twentieth century invented computers, it did not invent computation so much as it discovered it as a general force, and offered some initial basic tools to work with it more directly. We are, like everything else, also its product.
This conceptual shift is important to how we hope to consider reforming The Stack. One of Turing’s signal achievements is to show that an artificial “machine” could approach, and even approximate, the scope of natural computation, as defined in a particular way. His innovation was the specific pairing of formal logic with industrial technology that was, even after Charles Babbage and Ada Lovelace’s Victorian-era calculating machines, by no means obvious in its implications. For measuring the significance of that pairing in relation to The Stack, it is important to distinguish the limits of formal computation, on the one hand, from what the limits of actual computational technologies can really do, on the other. These are two very different kinds of limits. While Turing’s hypothetical machine demonstrated the mathematical limits of formal computability, it also demonstrated that any problems that could be captured and expressed symbolically through a reduction to rational integers (which likely describes the vast plurality of things and events in the world as representable by intelligent creatures) could be simulated and solved by a machine engineered to do so, given enough time, materials, and energy. Anything expressed as computable information, regardless of the natural appearance, linguistic identity, or economic value, could be processed by a universal information machine programmed to do so and physically capable of running through enough operations. A strong computationalist philosophical position may also extrapolate from this that natural systems can be (and so must be) reducible to information and computational processes. Problems arise when the notion that things are formally equivalent by their shared computability slides into the claim that they are therefore ontologically equivalent, or even culturally and economically equivalent. 
The questions raised by the idea of universally calculable matter are interesting on both practical and philosophical terms, but I raise them here to provide conceptual context for other questions.”
— Benjamin H. Bratton, “The Stack: On Software and Sovereignty”, 77-79
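The tape, read-write head, state register, and instruction table that the excerpt above attributes to Turing's 1936 paper can be made concrete with a short simulator. The sketch below is a toy illustration, not from the source: the `run_turing_machine` helper and the `INCREMENT` rule table (a binary incrementer) are my own example of the mechanism, under the usual simplification of a tape that is blank beyond the input.

```python
# Minimal Turing machine sketch: tape cells, a read-write head, a state
# register, and a table of (state, symbol) -> (write, move, next_state)
# instructions, as described in the excerpt above.
from collections import defaultdict

def run_turing_machine(table, tape, state="start", halt="halt", max_steps=10_000):
    """Repeat Turing's cycle: read the cell under the head, consult the
    table, rewrite the cell, move the head, and assume a new state."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # '_' marks blank cells
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += {"L": -1, "R": +1}[move]
    touched = sorted(cells)
    return "".join(cells[i] for i in range(touched[0], touched[-1] + 1)).strip("_")

# Toy rule table: increment a binary number (head starts at the leftmost
# digit). Scan right to the end of the input, then carry back leftward.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # reached the blank past the input
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry propagates
    ("carry", "0"): ("1", "R", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),    # overflow: prepend a new 1
}

print(run_turing_machine(INCREMENT, "1011"))  # prints "1100" (11 + 1 = 12)
```

The point of the toy is Bratton's: anything expressible in such a table of discrete rewrites is, in principle, within reach of one universal machine, given enough tape, time, and energy.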
[BB] …States have citizens, markets have consumers, and platforms have users. And what the ethical responsibility to and from users and platforms is, is something we don't have a good language for.
[KK] Would you say that the market and the state operate as platforms?
[BB] They can, but not necessarily. There are ways in which markets operate as different kinds of things that are not platforms, and states as well. What I'm trying to point at is that they do it in certain ways, and this deserves our attention. It would seem that the interface is very much the place where the politics of a platform are shaped. The interface structures the potentiality of the platform, towards the platform itself, and towards the user.
Let's take the GUI [graphical user interface] as a generic starting point. In order for any interface to be useful, to be operational, it has to be reduced into a set of things that allow participation to be at the scale of the gesture, as opposed to conceptualising the whole. This is inevitably a kind of ideological reduction of what those possibilities are. The extent to which we become ‘culturated’ into a particular space of that reduction is what I call an interfacial regime. There may be multiple interfacial regimes that we come in contact with over the course of the day, each of which is describing the rest of the whole to us in a particular way. They have a narrative logic, they have a value proposition, but they are also tools by which those logics are instantiated. It’s a value system and when you use it, it materially reinforces itself. But the interface is also the way in which the rest of the Stack sees the user.
[KK] In your Stack model, the user can be human, but doesn’t have to be. What is the user in the Stack?
[BB] When we say user-centred design, we focus on human users. Now, anything that can initiate what I call a column, that can activate all the layers of the Stack, can be a user. It is worth thinking about the layers as actually working sequentially, when one message is sent from one user to another user. The user sends a message to the interface layer, the address layer, all the way down to the earth layer, and then to the other user back up through all the layers of the Stack. It is important to keep in mind that in the simple movement of one message of a user to a user, the entire apparatus is invoked, the whole at once. But anything that initiates these columns is a user. That could be a person, but it could also be a high-speed trading algorithm, it could be an animal, vegetable, mineral, driverless car, whatever you want it to be. We see most of the traffic on the Internet is already non-human. This co-participation within the space of the Stack, with other machinic, animalian, algorithmic co-inhabitants will be one of the more difficult philosophical challenges for us to deal with. This is not radical cosmopolitanism, or the deeply universal suffrage in the Latourian sense of a Parliament of Things, because it's not a parliament, it's a machine. It is not a philosophical recognition; it's a mechanical co-participation. It's not about transference of sovereign will through mechanisms of representation by which some sort of transparent majority outcome is what steers it as a whole - that is not the mechanism. But to deal with the highly contingent status of the human user towards other users will be a big problem. I actually foresee a whole range of different varieties of humanist fundamentalism over the course of the next decade, pushing back against this.
[KK] This is one way in which understanding the platform as a third kind of organisational principle next to the market and next to the state becomes very urgent because otherwise we can only see users as either consumers or citizens and they could be very different things as well.
[BB] Exactly. And the kind of discourses we have about the status of the user, from Wikileaks etc., are trying to counter-weaponise the atomic, anomic individual in relationship to the state apparatus. But I think the potentiality for the user as a subject position is not about the individual withdrawal from this thing and a kind of Second Amendment weaponisation through encryption and privacy, but rather through the multiplication and pluralisation of user positions.
One of the real problems of interface design is that it supposes that an interface is used by one user at a time. It individuates and interpellates people as individual users, but in fact we are collaborating through these mechanisms in different ways, and if we could understand how to design these interfaces for these distributed, collaborative user positions, we would have something going there. The example I use is proxy users. When you have a proxy user system, the user is in one location, but as far as the interface is concerned, it thinks the user is coming from somewhere else. The user is not identical to the person. You could also have two users that are actually one person. At some point it almost doesn't matter whether you know which composite user you might be participating in at a particular time, any more than it is necessary for the bacteria in your body to know what your driver’s license number is.
The question of how we will even define the parameters and delineate the forms of the different kinds of user positions that any of us move in and out of over the course of the day is interesting. In other words: when we can conceptually separate the idea of the agency and the political rights and responsibilities of the user from those of the individual human organism, and no longer understand these as mapping isomorphically onto one another, we can have a much better conversation. Not because that would be “good”, but because it would help us to get a handle on where we are right now, at the very least.
— Benjamin H. Bratton in an interview with Klaas Kuitenbrouwer, Garden of Machines
“In architecture it's very hard to find any reason to go beyond the standard solution. And the standard solution becomes one because it's such a good solution to what you're trying to do. But the problem of it is that it's often looking at a single criterion. And a typical standard solution in architecture is very Bob Moses New York public housing. You know they need X amount of units, they need east/west exposure, they need a minimum distance between them, there's a certain height that's good for the elevator run and the amount of fire stairs... But it says nothing about diversity of household types, programmatic diversity to create a lively neighborhood, the life between the buildings or the prevailing winds. There are so many other factors you can take into consideration. And I think the secret recipe that we have developed that allows us to go beyond the standard solution is that we don't try to just provide X amount of real estate within a certain density, we actually try to pile on more demands. We also need to create a nice social space at the heart of the city block, we also need to ensure sunlight exposure, outdoor spaces, all kinds of things. And as you pile on these demands, suddenly the standard solution doesn't work any longer and you force the architecture into something different. […] By piling on more demands, by making the architectural problem more difficult to solve, we escape the straitjacket of the standard solution and we come up with something that answers a more difficult problem.”
— Bjarke Ingels in an interview with Michael Kimmelman, The New York Times
1. The brief / The state of design
“A renewed Copernican turn is needed everywhere, including in the philosophy of design. There it begins with the unsettling implications of our century’s circumstances, technologies, and deadlines. In practice, it shifts the balance from experiences to outcomes, from users to systems, from aesthetics to access, from intuition to abstraction, from expedience to ideals. The direct implications for design are fundamental, but habits are hard to change. From the Vitruvian Man to Facebook profiles, centuries of “human-centered design” (HCD) have brought more usable tools, but in many important domains design is far too psychologizing, individuating, and anthropocentric without being nearly humane enough. When raised to a universal principle, HCD also brought landfills of consumer goods, social media sophistry, and an inability to articulate futures beyond narrow clichés. In the name of amplifying the individual’s fertile desires, we’ve made a desert. Maximizing usability came at the expense of a deeper reason. The Copernican shift in the philosophy of design includes a rotation away from human-centered design and toward a fuller understanding of designing the human and the world. I don’t mean this in some transhumanist sort of way, but rather that the design of physical media is more than composing augmentations of a given subject, agent, and form. In Beatriz Colomina and Mark Wigley’s concise archaeology of design’s history, the practice is always ultimately about designing the human itself through designing its various exoskeletons, afterimages and anaesthetics.”
— Benjamin H. Bratton, “The Terraforming”
2. The response, Part 1: Ideation / “Form follows fiction” (Bernard Tschumi)
“Do you know Philip K. Dick’s definition of science fiction?” he asks. “He says science fiction is not a space opera, although it often happens in space, and it’s not a story from the future, although it often happens in the future. He says science fiction is a story where the plot is triggered by some form of innovation. Often it’s technological innovation, but it can be political, social, whatever. And the story is a narrative exploration of the potential of that innovation, of that idea. And not only the writer but the reader can actually think along and imagine how would our world be different if this one thing is different.
So you can say that science fiction is the medium where you do that in a narrative way,” Ingels continues. “But architecture is a discipline where you have the possibility to actually do it. All of Hell’s Kitchen is the way it is. The whole world is the way it is. And we do this one thing different.” Ingels raises his hand with a conductor’s flourish. “And what are the consequences for everything around us?”
— Bjarke Ingels in an interview with Mark Binelli, Rolling Stone
3. The response, Part 2: Implementation / “They look different because they perform differently” (Bjarke Ingels)
3.1. “Identify the change that is happening, or has happened recently, within the field of a particular project. It can be in the neighborhood where the project is, it can be in the building code. But it can also be in the behavior of the program that’s going to go into that building, it can be in the technology that’s used to build the building. […] Once you’ve identified the change, each project has to address the consequences, conflicts, problems and potentials that arise from this change.”
3.2. “Give a gift. Since we neither have the money nor the political power to make things happen [as architects], all we have is the power of interpretation. Then we can take all of the necessity that comes from the client, and we can take all of the rules and regulations that come from the city — there’s also a climate we have to respond to and all these other things. We can try to respond to that in a way that is not just answering all of the questions we’ve been given, but also putting something else forward in addition. We call this a gift. […] It can be provoked by the identification of the change, it can be that the change actually allows us to do something that would have previously been unimaginable or impossible.”
3.3. “Request that each project has to insert itself into the future of one of the six fields: thinking, sensing, making, moving, feeding and healing. By doing this, we ensure that we start generating some new knowledge, start opening some new doors, start actually creating new avenues of exploration. One of the things that I find revitalizing about our practice [at BIG] is that once in a while I feel that we have exhausted possibilities, and I start feeling ‘Ugh, something new must happen.’ And then we stumble upon one change, or we discover one new idea that opens up a door, and suddenly, through that door is an incredible unexplored landscape of possibility that somehow can feed us for many, many projects.”
— Bjarke Ingels in an interview with Andrew Zuckerman, Time Sensitive