3D Graph Convolutional Neural Networks in Architecture Design

Dr. Matias del Campo, Alexandra Carlson, Dr. Sandra Manninger

1 INTRODUCTION

This research started with a simple question: How can we train an algorithm, specifically some type of neural network, to understand and replicate the inherent sensibility of an architect? Of course, this question produces an entire plethora of follow-up questions, such as the quantifiable properties of sensibility, the methodology of data harvesting, the nature of labeling and so on. However, the primary intention in this line of inquiry is the interrogation of architectural design, namely the sensibilities encoded in the design process, through its inherently three-dimensional nature – a nature that can rarely be captured in full by most machine learning and vision algorithms, specifically the popular image-based Convolutional Neural Networks (CNNs).

Closeup of one of the resulting models, particularly geared towards the aesthetics of Czech Cubism.

This problem of how to characterize and define the nature of the architectural design process can be described (among other ways) through two representational devices: the plan and the model. Plans can be considered one of the oldest methods of representing spatial and aesthetic information in an abstract, two-dimensional way. Over a history of more than 4,000 years, the discipline has created a repository of architectural solutions that can be mined to address contemporary architectural problems. However, when used in the design process of 3D architectural solutions, these representations are inherently limited by the loss of rich information that occurs when compressing the 3D world into a two-dimensional representation.
During the first Digital Turn1, the sheer amount and availability of models increased dramatically, as it became viable to create vast numbers of model variations to explore project alternatives along different physical and creative dimensions. 3D models show how the design object appears in real life and can include a wider array of object information that is more easily understandable by non-experts, as exemplified in techniques such as Building Information Modeling and Parametric Modeling.
Therefore, the ground condition of this paper considers that architecture's inherent nature lies in the negotiation of space, the organization of voids, and the assemblage3 of spatial components resulting in spatial sequences – based on programmatic relationships. All of these conditions constitute objects representing a material culture (the built environment) embedded in a symbolic and aesthetic culture2 that is created by the designer and captures his or her sensibilities. In fact, we could take cues from Manuel de Landa's thoughts on the nature of the assemblage, which in itself possesses several of the traits described above – and which ultimately results in a set of social ontologies3. We take the argument of architecture as a method of organizing physical space through the assemblage of matter as the guiding principle of this paper, intentionally leaving out aspects of ephemeral spatial conditions created through software, social networks, the cloud and other digital media, in order to create a rigorous framework of discussion.

Now that this definition is established, we can continue by interrogating the design abilities of Neural Networks that question the world through the lens of three-dimensional models.

In this respect, the motivation of this paper is to explore machine learning methods that allow us to interrogate large datasets of architectural archetypes based solely on 3D models, instead of two-dimensional representations such as image databases4. In this paper we strive to lay out a new and exciting framework for architecture design based upon an alternative way of representing 3D objects. Instead of representing a 3D object as a pixel array/2D image, we propose to use a much more flexible data structure called a graph. In particular, we propose to use a generalization of the standard CNN, called a Graph Convolutional Neural Network5 (GCNN), in order to model and perform the entirety of the design process using only 3D models and information.
The central aim of this paper is to provide a possible solution to the question: How can a Neural Network interrogate the inherent sensibility of a specific designer? In our solution, we assume that we can model a designer’s sensibility as a high dimensional function that can be learned and applied to generate novel design solutions.


This paper presents a design technique, whose backbone is GCNNs, that is capable of modeling both the perception and creation of architectural objects. Key to this technique is to train a GCNN to capture the aesthetic sensibilities of a specific designer based on neuroaesthetic labels such as basic visual features (e.g., coloration, geometric features like scale, semantic properties), aesthetic quality, and functional quality. This paper lays out a method that collects 3D models from the hand of one designer, creating a large database of models in two distinct categories – houses and columns – in order to train a Neural Network to come up with additional model solutions that are generated using the learned features of the trained network.

2 BACKGROUND and DEFINITIONS:

In order to create a clear frame for the conversation laid out in this paper, specific boundaries in terms of the definitions used have to be drawn. In the following, we specify the definitions used in this paper as they pertain to Artificial Intelligence and Architecture.

2.1 Aesthetics:

The term aesthetics is a highly charged one in the architecture discourse. This includes considerations such as the antithetical position of Peter Eisenman, who demanded that figures of architecture should be read rhetorically instead of aesthetically or metaphorically6, or Anthony Vidler's discussion of aesthetics along the lines of its functional and metaphysical properties, such as the sublime7. The authors would propose first to interrogate the etymology of the word aesthetics. Greek in its origin (aisthēsis), the term literally means perception by the senses. The German philosopher Alexander Gottlieb Baumgarten considered art to be the perfection of sensory awareness8. It is important to understand that the term aesthetics has undergone a series of mutations since Baumgarten's time, and that even contemporaries of Baumgarten, such as Immanuel Kant, held a very critical position towards Baumgarten's definition, as it lacked – according to Kant – any objective rules, laws or principles able to describe natural or artistic beauty9. In contrast to the objective usage of the term aesthetics, Kant relied on its use in a subjective manner, as it relates to internal feeling and not to any qualities of an external object. Baumgarten's assertion that the three guiding principles of aesthetics are "Good, Truth and Beauty" appears essentially naive in the contemporary age10. It is thus surprising for the authors to find this branch of the conversation on aesthetics still being perpetuated by critics such as Roger Scruton11 and Yael Reisner12. In general, it could be stated that the meaning of the term aesthetic has become equivocal, connoting something whose appearance displays a particular set of qualities that evoke a response in the observer – while in its original philosophical meaning aesthetics has turned into a technical discipline of philosophy, associated with all its aspects of definition and ontology. Interrogating the full extent of the meaning of this criticism for contemporary architecture, especially in the light of Neural Architecture, would far exceed the length of this paper; thus the authors apologize for the brevity of this definition.

2.2 Sensibility:

When the authors utilize the term sensibility, it is specifically geared towards aspects of artistic sensibility. In the previous section we outlined a historical and theoretical development along the lines of Baumgarten's and Kant's definitions of aesthetics, which concluded in the insight that aesthetics is, at its very core, a theory of sensibility – evoking a response in the observer. This assertion applies to the arts of the past as much as to the arts of the present, recognizing aesthetic value as a specific feature of all experience. Or, as Arnold Berleant put it: "Such a generalized aesthetic enables us to recognize the presence of a pervasive aesthetic aspect in every experience, whether uplifting or demeaning, exalting or brutal. It makes the constant expansion of the range of architectural and of aesthetic experience both plausible and comprehensible."13 What we mean by sensibility is the perceptual awareness developed and guided through training and exercise. To this extent it is certainly more than simple sensual perception, and closer to something like a guided, or educated, sensation – an education that has to be continuously fostered, polished and extended, through encounters and activities, in order to maintain the ability to execute tasks with an aesthetic sensibility. This ability is attributed in the Western traditions primarily to the arts – to painting, sculpture, music, literature and so on – with architecture being this strange animal living somewhere between engineering and the arts.

2.3 Agency:

In the Western philosophical tradition, our choices are not produced by causal chains, as would be the case with objects responding to natural forces; this is what gives us agency. Free will and agency are closely related, but not identical; they share traits, in that agency is undetermined but significantly free. In contrast to inanimate objects, humans can make decisions and enforce them onto the world – for the moment we will leave out the metaphysical question of how humans make decisions. In any case this entails moments of moral agency, as particular acts of human agency require thought and consideration about their outcomes. For this paper, we rely on agency as part of the debate on action theory14. This can be exemplified with the philosophical traditions established by Hegel15 and Marx16, which consider human agency in a collective fashion – thus, for our frame of thinking, we consider the relation between human and Neural Network as such a collective with a particular agency. To this extent, Realist17 and Materialist18 considerations collide with aspects of determinism19 and indeterminacy20.

2.4 Authorship:

Relevant for the frame of conversation in this paper are the positions of Roland Barthes and Michel Foucault in regard to the nature of authorship and the author at large. These critics interrogated the role and relevance of authorship as it pertains to the interpretation or meaning of a text – a critique that can be expanded to all areas of artistic production, for example to architecture in the present paper. Barthes, for example, attributed meaning to language and not to the author of the text. Instead of relying on legal authority to exude authorship21, Barthes assigns authority to the words and language itself. Foucault's critical position towards the author can be found in the argument he presents in his essay What is an Author?22. Foucault argues that all authors are writers, but not all writers are authors – echoing a broadly shared sentiment in architecture: not every architecture is a building and not every building is architecture.

2.5 3D Neural Modeling:

Convolutional neural networks have been quite successful as image generators. For example, classification networks have been used successfully in the image manipulation techniques Deep Dreaming23 and Neural Style Transfer24. Generative adversarial networks have been used for style transfer and attribute editing25.
There has been significant development of design techniques that hack these 2D rendering and editing methods so that they can be used in the design process of 3D models. Examples include 2D-to-3D Neural Style Transfer, 2D silhouette-based vertex optimization, and the 3D Deep Dreaming proposed in the Neural 3D Mesh Renderer26. Other impressive differentiable renderers that propose image-based mesh deformation frameworks include PyTorch3D27, SoftRas28, and DIB-R29. There has also been impressive work in jointly training 2D and 3D networks for image-guided mesh deformation and reconstruction30,31,32. However, these techniques are inherently limited by the loss of rich information that occurs when compressing the 3D world into a 2D representation. Furthermore, it is difficult to control the object properties that are transferred in these processes, requiring significant post-processing on the part of the user to make the output realistic.

In contrast, the proposed mesh-editing technique allows the user to choose the 'feature scale' on which the shape transfer occurs. The recent success of deep learning has inspired alternative methods for data-driven mesh deformation, synthesis, and reconstruction. These methods have typically revolved around simultaneously solving two subproblems: first, part or substructure deformation or transfer, and second, the preservation of fine geometric surface detail. They typically leverage a generative Variational Autoencoder framework to learn a latent space that disentangles varying desired properties of meshes33,34,35,36, and many also leverage graph convolutions to construct these frameworks. The objective of these frameworks is to learn shape-variation manifolds from which latent representations can be sampled to generate 3D shapes. The drawback of some of these methods is that they require large datasets with semantic and part annotations to aid in the disentangling process, which are time consuming and difficult to collect35,36. A separate body of work forgoes purely generative models and instead uses deep neural networks to learn continuous extensions of traditional deformation methods37. The proposed method differs from the above deep 3D generative models because it does not depend upon learning latent, compressed representations of meshes; instead, we use label supervision from our dataset to guide the neural network to learn our desired parameter space, and the method takes an existing mesh as input and generates a variant by deforming it directly in 3D. As a result, the proposed method enables reusing existing 3D shape data with their associated meta-information, e.g., mesh color and texture, which can be carried along in the deformation. Note also that this work does not focus on part or substructure transfer, or on transforming mesh pose; it focuses on modeling and transferring the abstract artistic concepts and qualities that define an artist's sensibility.

3 METHODS:

The proposed approach can be broken down into several steps. We start by generating and labeling a large database (Fig. 1) of OBJ mesh models of two specific architectural classes, houses and columns, described in Section 3.1 below. We refer to this dataset as the Sensibility Dataset. The next step is to train a graph convolutional neural network on this dataset to learn a mapping function from the Cartesian coordinate space of the 3D models to our defined label space of sensibility features, described in Section 3.2 below. The final step places the trained GCNN into an optimization framework that takes a user-specified 3D model as input and deforms its vertices to optimize the shape for a user-specified visual aesthetic or functionality label, also described in Section 3.2 below.

Figure 1: Renderings of the entire Sensibility Dataset. The first 626 objects are houses, and the remaining 1552 are columns. This dataset was designed by co-author Matias del Campo so that it captured his design sensibility. This also included the labeling of the dataset by the same hand, in order to mirror the design sensibility.
3.1 Database Construction:

The models in the Sensibility Dataset (Fig. 1) were generated using a series of techniques and divided by their exterior volumes into two architectural classes: houses and columns. The models started as low-poly models generated with the standalone software TopMod, a small topological mesh modeling application, which allows the generation of low-poly models designed to become either entire buildings (house, tower) or parts of the architecture (columns). These first-generation models were saved as OBJ files and imported into Autodesk Maya in order to create enough variation to constitute the database. Imported models were deformed and mirror-cut to arrive at roughly 1,500 models. The original models were all created by the same author, who used blend shaping in Maya to increase the number of varying models. To increase the number of models in Maya even further, an assistant was instructed to randomly, and occasionally extremely, deform the models in order to account for models that could be labeled with low functionality or low aesthetic value. In our case we relied on a set of models and their distortions (mirror-cutting, deformations of the OBJ models) in order to create variation in the dataset.

Our label space decomposes the definition of a 'designer's sensibility' into the following properties: semantic, style, functionality, and aesthetics. The semantic property of a given data object defines the object's concept/meaning; within our dataset, it can take on the value of house or column. The style property of an object refers to the substructures of the object that are distinctive and define its appearance; they are determined by the theoretical/artistic principles that influenced the object's final design. In our dataset, a given object can take on one of three styles: Style A, which is inspired by structures/features typically associated with the Baroque; Style B, which is defined by substructures that exist within classic architecture; and Style C, which is defined by features associated with Cubism. The functionality property is defined as the practicality of the object and its ability to serve a purpose well; within our dataset, an object's functionality is a score from 1 to 5, where 1 is 'not functional' and 5 is 'fully functional'. The aesthetics property is defined as the subjective and sensori-emotional values, sometimes called judgments of sentiment and taste, and is also a score on a scale of 1 to 5. We intentionally included ugly and nonfunctional models in order to teach the GCNN, in a synthetic process, the difference between functional and non-functional as well as between aesthetically pleasing and displeasing models. Both actions together form an organosynthetic process. In Fig. 2, we show the distribution of the dataset in terms of its labels.
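To make the label space concrete, the sketch below shows one way a single Sensibility Dataset entry could be encoded in Python. The field names, class indices, and file path are our own illustration, not a fixed format used in the actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical encoding of one Sensibility Dataset entry. Field names,
# class indices, and the path are illustrative, not the authors' format.
SEMANTIC = {"house": 0, "column": 1}
STYLE = {"A (baroque)": 0, "B (classic)": 1, "C (cubist)": 2}

@dataclass
class SensibilityLabel:
    semantic: int        # house or column
    style: int           # Style A, B, or C
    functionality: int   # 1 = not functional ... 5 = fully functional
    aesthetics: int      # 1-5, the labeler's subjective judgment
    mesh_path: str       # the corresponding .obj file

example = SensibilityLabel(
    semantic=SEMANTIC["house"],
    style=STYLE["C (cubist)"],
    functionality=3,
    aesthetics=5,
    mesh_path="sensibility_dataset/houses/house_0412.obj",  # hypothetical path
)
```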

Figure 2: The distribution of labels in the dataset. The labels were applied post-model generation, and thus the dataset is biased towards specific styles. Note that this is intentional and reflects the biases in the design sensibility of the author of the models in the dataset. 
3.2 Neural Optimization Framework:

The goal of this neural network is to learn a reasonable approximation of the function that describes a given architect's design sensibility, i.e., a function that maps from 3D metric space to our semantic, style, aesthetic, and functionality label spaces. Due to the extreme differences in resolution and model count between columns and houses, we split the Sensibility Dataset into these two semantic subsets and trained two separate networks, one on house models and one on column models. For each of these networks, we implemented a multi-task classification GCNN architecture, shown in Fig. 3. It has four graph convolution layers, with feature dimensions of 128, 256, 256, and 512. A global average pooling layer then operates on the vertex dimension of the graph convolutional features. This output is fed into a shared fully connected layer, and the resulting representation is input into three separate linear branches: one for functionality prediction, one for aesthetic prediction, and one for style prediction. Each branch is trained using a standard cross entropy loss, and the total loss is the sum of the losses from each task branch. Both networks were trained with a batch size of 32 and a learning rate of 2e-4, using Adam optimization, until the loss converged.
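The paper does not prescribe a specific software framework; the following is a minimal sketch of the described multi-task architecture and training step, assuming PyTorch and PyTorch Geometric. The class names, the use of per-vertex x, y, z coordinates as input features, and the ReLU activations are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool  # assumed framework

class SensibilityGCNN(nn.Module):
    """Multi-task GCNN as described in the text: four graph convolution layers
    (128, 256, 256, 512), global average pooling over the vertex dimension,
    a shared fully connected layer, and three classification branches."""
    def __init__(self, n_styles=3, n_scores=5, shared_dim=512):
        super().__init__()
        self.conv1 = GCNConv(3, 128)      # per-vertex input: x, y, z coordinates
        self.conv2 = GCNConv(128, 256)
        self.conv3 = GCNConv(256, 256)
        self.conv4 = GCNConv(256, 512)
        self.shared = nn.Linear(512, shared_dim)
        self.style_head = nn.Linear(shared_dim, n_styles)  # Style A / B / C
        self.func_head = nn.Linear(shared_dim, n_scores)   # functionality 1-5
        self.aesth_head = nn.Linear(shared_dim, n_scores)  # aesthetics 1-5

    def forward(self, verts, edge_index, batch):
        x = F.relu(self.conv1(verts, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))
        x = F.relu(self.conv4(x, edge_index))
        x = global_mean_pool(x, batch)      # average over the vertex dimension
        x = F.relu(self.shared(x))
        return self.style_head(x), self.func_head(x), self.aesth_head(x)

def training_step(model, optimizer, verts, edge_index, batch,
                  y_style, y_func, y_aesth):
    """One optimization step; the total loss is the sum of the per-task
    cross entropy losses, as described in the text."""
    optimizer.zero_grad()
    style_logits, func_logits, aesth_logits = model(verts, edge_index, batch)
    loss = (F.cross_entropy(style_logits, y_style)
            + F.cross_entropy(func_logits, y_func)
            + F.cross_entropy(aesth_logits, y_aesth))
    loss.backward()
    optimizer.step()
    return loss.item()

model = SensibilityGCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)  # learning rate from the text
```

One such network would be trained per semantic subset (houses and columns), with a batch size of 32, until the summed loss converges.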

Figure 3: A pictorial overview of the GCNN neural network architecture. 

Once the training process is over, the trained Sensibility prediction GCNNs act as our 'designer': they have learned the features associated with our sensibility label space. We can now fix the parameters of these networks and invert them to transfer user-specified labels (and the corresponding sensibility features) to input meshes. A diagrammatic overview is given in Fig. 4.

Figure 4: A pictorial overview of the neural optimization framework. By inverting the trained GCNN, we can effectively transfer the learned sensibility features captured in the neural network’s parameters to an input mesh.

We set the input mesh vertices as variables and specify a set of desired style, aesthetic, and functionality labels. Using gradient-based optimization, we can iteratively change the locations of the vertices of the input mesh (i.e., deform the input mesh to produce an output mesh) such that the final deformed mesh produces the specified output labels. Note that the mesh vertices are now being learned, while the network parameters and labels are fixed. This process is similar to class-level deep dreaming38: we directly manipulate the vertex locations of the input mesh in order to minimize the difference between the predicted sensibility labels and the user-specified target sensibility labels. For each iteration of the optimization, we use one of the fixed GCNNs to project the current mesh into label space. We compare the mesh label predictions to the user-specified labels by calculating the cross entropy loss between them. This error is then backpropagated through the network into 3D space, where we obtain an error value for each vertex representing how much that vertex contributed to the mesh prediction. These values are used to deform the vertex locations of the mesh in 3D space. We iterate until the loss value has converged. For example, a user could input a plain rectangular column and specify that the style and aesthetic quality of the column should be optimized, and the framework will iteratively deform the mesh to resemble the learned aesthetics and style captured by the dataset. In the following section, we demonstrate that this optimization framework fairly accurately mimics the sensibility of the architect who labeled the 3D model dataset, Matias del Campo, and thus allows users to deform meshes in the same way that this architect would edit and modify the input mesh himself.
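As an illustration of this inversion step, the sketch below assumes the hypothetical SensibilityGCNN interface from the previous section; the optimizer choice, step count, and step size are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn.functional as F

def dream_mesh(model, verts, edge_index, target_style, target_func, target_aesth,
               steps=500, step_size=1e-2):
    """Deform an input mesh so that the frozen Sensibility GCNN predicts the
    user-specified labels: the network parameters stay fixed and only the vertex
    coordinates are optimized (analogous to class-level deep dreaming).
    `steps` and `step_size` are illustrative, not values from the paper."""
    for p in model.parameters():
        p.requires_grad_(False)                      # freeze the trained network
    verts = verts.clone().detach().requires_grad_(True)
    batch = torch.zeros(verts.shape[0], dtype=torch.long)  # a single input mesh
    opt = torch.optim.Adam([verts], lr=step_size)    # optimize vertex positions only

    for _ in range(steps):
        opt.zero_grad()
        style_logits, func_logits, aesth_logits = model(verts, edge_index, batch)
        # Cross entropy between the current predictions and the target labels.
        loss = (F.cross_entropy(style_logits, target_style)
                + F.cross_entropy(func_logits, target_func)
                + F.cross_entropy(aesth_logits, target_aesth))
        loss.backward()                              # per-vertex error signal
        opt.step()                                   # deform the mesh in 3D
    return verts.detach()

# Example: deform a plain mesh towards a Style C house with high aesthetics
# and medium functionality (class indices follow the hypothetical encoding above).
# new_verts = dream_mesh(model, verts, edge_index,
#                        target_style=torch.tensor([2]),
#                        target_func=torch.tensor([2]),   # rating 3 -> index 2
#                        target_aesth=torch.tensor([4]))  # rating 5 -> index 4
```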

4 EXPERIMENTS AND RESULTS

To evaluate the proposed method's ability to transfer Matias del Campo's 'design sensibility', we present the following qualitative experiments. We first present extensive examples of sensibility-optimized meshes produced by the proposed framework. We then present qualitative ablation experiments manipulating the different sensibility axes of our label space to investigate the quality of the transfer. Finally, we present an experiment in which we use a network interpretability technique, visual saliency, to elucidate potential mesh substructures that could uniquely contribute to aesthetics, functionality, or style. Note that our final two analyses focus on houses for brevity.

Figure 5: Examples of deformations produced by the proposed framework. On the left-hand side are examples of simple geometries deformed into columns of various styles: the top image panel shows an icosahedron deformed into a column with Style B, the middle panel a cube deformed into a column with Style C, and the bottom panel a cylinder deformed into a column with Style B. On the right-hand side are examples of input shapes deformed into houses: the top panel shows a dodecahedron deformed into a Style C house, the middle panel an octahedron deformed into a Style A house, and the bottom panel an icosahedron deformed into a Style C house. All deformed models used a high aesthetic and a medium functionality value.

For the columns, we see that the network has been able to learn to elongate and thin out the input shapes to capture the global structure that is typically associated with columns. Similarly, with the houses, we see that they retain volumes similar to their original shapes, and have surfaces that could be considered ‘floor-like’.

Our next experiment investigated the manipulation of each label space while holding the others constant, in order to examine the quality of the internal models learned by the GCNN for each of these feature axes. For this experiment, we focused on the generation of different houses from an octahedron mesh. For each of the labels – style, aesthetics, and functionality – we fixed all other parameters and varied one. The outputs are presented in Fig. 6. For varying styles (top row of the figure), we set aesthetics and functionality to ratings of 4 and held those values constant while altering the style; we see that the final results have distinctly different forms. For varying functionality (middle row), we set the style to Style C and fixed aesthetics at a rating of 4 while varying the functionality rating. For varying aesthetics (bottom row), we set the style to Style C and fixed functionality at 3 while varying the aesthetic rating.

The classification into Styles A, B and C relied on a set of specific rules derived from Baroque, Classic and Cubist architecture. For example, the Baroque can be characterized through features such as symmetry, curvature and the play of concave and convex. Classic can be read here as Classicist or Modern, where Modern relies on the formal and stylistic qualities of high Modern architecture from the 1920s to the 1950s (proportionality, orthogonality, asymmetry). Cubist pertains to the triangulation of polygonal bodies, akin to features found in Czech Cubist architecture. This is, of course, a blatant simplification of each of these styles, and rather an innate response from the creator of the database – a less scientific and more spontaneous response to visual stimuli. The structure of the aesthetic and functional considerations follows a similar formula in that it is extensive in nature but rather intensive with regard to the labeling process of the dataset.

As in the previous experiment, we observe that semantic and stylistic shapes are the dominant features transferred. The fidelity of the aesthetics and functionality transfer is much more nuanced due to their significantly more subjective nature, particularly with regard to the qualities that make a different style of house more or less functional or aesthetic to the designer and labeler of the dataset. The gauge of functionality has to do with the resulting model proportions: if the resulting figure has the wrong proportions for a house, it is 'less functional' (the figure is too low, too high, or too narrow to accommodate the program of a house). We can see this manifest in the models as the functionality score is increased; the models go from 'too high' walls to spreading out into flattened roof and ceiling features. Aesthetics is far more difficult to capture objectively because it relies on the labeler's unique and subjective sensibility. The shape generated with high aesthetics (a rating of 5) has well-proportioned length, width and height, which means the model can be scaled proportionally and could fulfill various programs, from house to, for example, concert hall. The relationship between concave and convex parts is also nicely balanced, giving an overall even figure. The model silhouette is exciting without being overly aggressive, despite its many pointed elements.

Figure 6: Outputs of the label-manipulation experiment. Style, functionality and aesthetics do not necessarily follow objective criteria, but rather the criteria put into place by Matias del Campo. For example, varying functionality to a rating of 5 produces a volume that can be converted into a low-slung house on a hillside – see the final rendering.

The previous two results demonstrate that this model has effectively learned 'architectural taste/aesthetics' as defined by the architect and labeler of the dataset. However, it remains unclear whether the neural network learns to associate the same visual features with 'pleasing' as an architect does. To further our investigation into GCNNs' capabilities of modeling sensibility and how they map onto an architect's, we performed the following network interpretability experiment to shed light upon which features of the input meshes the neural network learns to associate with the abstract concepts of style, aesthetics, and functionality. In this work, we extend a particular interpretability technique, visual saliency9, to GCNNs. This allows us to qualitatively/visually examine the structural features that our Sensibility network associates with the different style, aesthetic, and functionality classes.

We use gradient-based visual saliency mapping to assign an intensity value (or color) to each vertex in the input mesh. This intensity corresponds to how much that vertex contributes to a particular class prediction. To calculate the visual saliency for a given input mesh, we first perform a forward pass of the input mesh through the trained Sensibility prediction network. Then, choosing one of the task predictions, we calculate the backward pass to obtain the gradient of that prediction with respect to the input mesh vertices. The normalized gradients serve as our vertex colors. Examples for three house models are shown in Fig. 7.
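A minimal sketch of this per-vertex saliency computation, again assuming the hypothetical SensibilityGCNN interface sketched above; the gradient-magnitude reduction and min-max normalization are our own assumptions.

```python
import torch

def vertex_saliency(model, verts, edge_index, task="style"):
    """Gradient-based saliency: the magnitude of the gradient of the chosen
    task's predicted-class score with respect to each vertex, normalized to
    [0, 1] so it can be used directly as a per-vertex color intensity."""
    verts = verts.clone().detach().requires_grad_(True)
    batch = torch.zeros(verts.shape[0], dtype=torch.long)
    style_logits, func_logits, aesth_logits = model(verts, edge_index, batch)
    logits = {"style": style_logits,
              "functionality": func_logits,
              "aesthetics": aesth_logits}[task]
    score = logits[0].max()         # score of the predicted class for this task
    score.backward()                # backward pass down to the input vertices
    saliency = verts.grad.abs().sum(dim=1)               # one value per vertex
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return saliency
```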

Figure 7: Examples of the visual saliency/vertex coloring outputs for house models in the dataset. 

We see complex, shape-dependent relationships between the features for style, aesthetics, and functionality. For example, in the top row, the Sensibility prediction network appears to use different and unique vertex features for each task. In the middle row, however, it appears to leverage very similar vertex clusters for predicting style, functionality, and aesthetics. Finally, in the last row, we see that the same vertices have a positive impact on the style and aesthetics predictions, but not on functionality. This highlights the difficulty of trying to quantify and encode highly intertwined design concepts. Further work will be needed to determine whether it is possible to disentangle these abstract properties from one another.

5 DISCUSSION AND CONCLUSION

In conclusion, it can be stated that, provided with a large dataset of 3D models produced by one hand and distorted to create more variation, a Graph CNN can interrogate those models for underlying rulesets. These rulesets can generate architectural results that comply with the aesthetic criteria of the user. There are two main paths of inquiry to be considered when assessing an algorithm's ability to contribute to design processes: first, the interrogation of the technical expertise necessary to train neural networks to generate successful solutions for pragmatic problems, such as plan optimization, structural optimization and the analysis of material consumption. The second path concerns the aspects of architectural design pertaining to studies of morphology, style, atmosphere and creativity. Our goal was to test the capability of neural networks to model and transfer the highly abstract and complex concept of a designer's sensibility. Leveraging the powerful ability of neural networks to ingest and learn from large databases of models that span cultural and historical dimensions a human or humans would take a lifetime to synthesize, we present a solution that allows for the transfer of aesthetic, style, and functionality features to meshes to generate new objects that capture the design sensibility of our co-author, Matias del Campo. We show a finalized rendering of one of the Sensibility-transferred cubist houses in Fig. 8.

Figure 8: An example of one of the cubist house outputs.

Future work would be to collect larger, more comprehensive datasets (possibly from different designers/architects), which would allow for a more thorough investigation into how neural networks represent these abstract artistic concepts. Another avenue of future work is to explore the feature spaces of generative networks trained on these kinds of datasets. In conclusion, it can be stated that 3D Graph Convolutional Neural Networks open avenues for architecture design that allow us to interrogate the wicked problems of architecture design (sensibility, aesthetics) as well as its tame problems (program, organization). To the surprise of the authors, when using databases of Baroque or Modern architecture plans to design new projects, the results do not look Baroque or Modern. They result in something new, different, alien, strange and wonderfully beautiful – maybe the first genuine 21st-century architecture.

6 REFERENCES

1: Carpo M., The Digital Turn in Architecture 1992-2012, AD Reader, John Wiley & Sons, West Sussex, UK, 2013, pp. 8-14

2: de Landa, M., Assemblage Theory, Edinburgh University Press, Edinburgh, UK, 2016, pp.44-45

3: de Landa, M., Assemblage Theory, Edinburgh University Press, Edinburgh, UK, 2016, pp.20-21

4: See for example del Campo, M., Carlson, A., Manninger, S., Machine Hallucinations.

5: Wu, Zonghan, et al. “A comprehensive survey on graph neural networks.” IEEE Transactions on Neural Networks and Learning Systems (2020).

6: Eisenman, P., Architecture and the Problem of the Rhetorical Figure, Architecture and Urbanism (A+U) No.202 (July 1987), Tokyo, Japan, pp. 16-22 

7: Vidler A., ed., Architecture’s Expanded Field in Architecture Between Spectacle and Use, Sterling and Francine Clark Art Institute, Williamstown, Mass, 2008, pp 143-154

8: Baumgarten A. G., Aesthetica, J.C. Kleyb, Traiecti cis Viadrum (Frankfurt), 1750, §1

9: Kant, I., Kritik der Reinen Vernunft, Verlag Johann Friedrich Hartknoch, Riga. 1781, A21 Note

10: Leo Tolstoy was an ardent critic of that notion and stated in his book “What is Art?” that Baumgarten’s trinity of Good, Truth and Beauty “have no definite meaning, but they hinder us from giving any definite meaning to existing art….”

11: Scruton, R., Why Beauty Matters (TV Documentary) directed by Louise Lockwood, BBC2 2009

12: See for example Yael Reisner’s “Beauty Matters” Tallinn Biennale 2019

13: Berleant A., Aesthetic Sensibility, Ambiances [En ligne], Varia, mis en ligne le 30 mars 2015, consulté le 24 novembre 2020. URL : http://journals.openedition.org/ambiances/526; DOI: https://doi.org/10.4000/ambiances.526

14: See also: Davidson, D., Essays on Actions and Events: Philosophical Essays Volume I, Clarendon Press, Oxford, UK, 2001 

15: Speight, A., Hegel, Literature and the Problem of Agency, Cambridge University Press, Cambridge, England, UK, 2001 

16: Pratten, S. Structure Agency and Marx’s analysis of the labor process. Review of Political Economy, 5:4, 403-426, DOI: 10.1080/09538259300000029, 1993

17: G. W. F. Hegel, Aesthetics. Lectures on Fine Art, trans. T. M. Knox, 2 vols. Oxford: Clarendon Press, 1975.

18: Brosio, R. A., Chapter Three: Various Reds: Marx, Historical Materialism, Critical Theory, and the Openness of History. Counterpoints 75 (2000): 79-120. Accessed December 7, 2020. http://www.jstor.org/stable/42976139.

19: Millican, P., Hume’s Determinism. Canadian Journal of Philosophy 40, no. 4 (2010): 611-642. muse.jhu.edu/article/411607.

20: Hertzmann, A., Visual Indeterminacy in GAN Art, Leonardo / SIGGRAPH 2020 Art Papers; Leonardo, Volume 53, Issue 4, August 2020

21: Barthes, R., The Death of the Author, Essay 1967, in Sontag S., ed. A Barthes Reader. New York: Hill and Wang, 1982

22: Foucault, M., What is an Author? in Faubion, J.D ed, Aesthetics, Method and Epistemology, The New Press, New York, USA 1998, pp.205-222

23: Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). DeepDream – a code example for visualizing Neural Networks. Google Research. Archived from the original on 2015-07-08.

24: Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).

25: Huang, X., Liu, M.-Y., Belongie, S., and Kautz, J., Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 172-189. 2018.

26: Kato, Hiroharu, Yoshitaka Ushiku, and Tatsuya Harada. “Neural 3D mesh renderer.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.

27: Ravi, Nikhila, et al. “Accelerating 3d deep learning with pytorch3d.” arXiv preprint arXiv:2007.08501 (2020).

28: Liu, Shichen, et al. “Soft rasterizer: A differentiable renderer for image-based 3D reasoning.” Proceedings of the IEEE International Conference on Computer Vision. 2019.

29: Chen, Wenzheng, et al. “Learning to predict 3D objects with an interpolation-based differentiable renderer.” Advances in Neural Information Processing Systems. 2019.

30: Wang, W., Ceylan, D., Mech, R., and Neumann, U., 3DN: 3D deformation network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1038-1046. 2019.

31: Groueix, T., Fisher, M., Kim, V. G., Russell, B. C., and Aubry, M., A papier-mâché approach to learning 3D surface generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 216-224. 2018.

32: Wang, N., Zhang, Y., Li, Z., Fu, Y., Liu, W., Jiang, Y., Pixel2Mesh: Generating 3D mesh models from single RGB images. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 52-67. 2018.

33: Gao, L., Yang, J., Qiao, Y., Lai, Y., Rosin, P. L., Xu, W., Xia, S., Automatic unpaired shape deformation transfer. ACM Transactions on Graphics (TOG) 37, no. 6 (2018): 1-15.

34: Tretschk, E., Tewari, A., Zollhöfer, M., Golyanik, V., Theobalt, C., DEMEA: Deep mesh autoencoders for non-rigidly deforming objects. In European Conference on Computer Vision, pp. 601-617. Springer, Cham, 2020.

35: Gao, L., Yang, J., Wu, T., Yuan, Y., Fu, H., Lai, Y., Zhang, H., SDM-NET: Deep generative network for structured deformable mesh. ACM Transactions on Graphics (TOG) 38, no. 6 (2019): 1-15.

36: Mo, K., Guerrero, P., Yi, L., Su, H., Wonka, P., Mitra, N., and Guibas, L. J., StructureNet: Hierarchical graph networks for 3D shape generation. arXiv preprint arXiv:1908.00575 (2019).

37: Park, J. J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S., DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 165-174. 2019.

38: Mordvintsev, A., Olah, C., and Tyka, M., Inceptionism: Going deeper into neural networks. (2015).

39: Olah, Chris, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. “The building blocks of interpretability.” Distill 3, no. 3 (2018): e10.

40: Olah, C., Mordvintsev, A., Schubert, L., Feature visualization. Distill 2, no. 11 (2017): e7.

41: Mordvintsev, A., Olah, C., Tyka, M., Inceptionism: Going deeper into neural networks. (2015).
