Artificial intelligence has radically transformed the way images are produced, perceived, and valued. No longer just tools for editing or enhancement, AI models such as MidJourney, Stable Diffusion, and DALL·E have become autonomous image generators, blurring the boundaries between photography, illustration, and computational aesthetics. The emergence of AI-generated images raises profound questions: How does AI alter artistic creativity? What happens to realism when images are synthesized from statistical models rather than captured from reality? Can AI-generated images develop their own aesthetic identity, or are they trapped in an endless loop of algorithmic remixing?
At the heart of these discussions is a fundamental shift: AI-generated images no longer function as mere representations of the world but as complex outputs shaped by data-driven processes. Unlike traditional photography, which has long been associated with indexicality and evidence, AI-generated images exist in a liminal space where meaning is inferred rather than captured. The “latent space” of AI—where images are mapped according to probabilistic associations—has become a new site of aesthetic exploration, challenging our historical understanding of images as records of reality.
Recent debates in media theory, visual culture, and digital art criticism highlight several transversal axes that define the aesthetics of AI-generated images. These include the reconfiguration of meaning, the crisis of realism, the automation of artistic creation, the socio-political stakes of synthetic media, and the historical continuities that shape AI aesthetics. By exploring these themes, we can better understand how AI-generated images reshape the visual landscape and redefine what it means to create and perceive images in the digital age.
In the following sections, we will unpack these five key axes, offering a comprehensive look at the evolving aesthetics of AI-generated images and their implications for contemporary visual culture.
1. The Reconfiguration of Meaning in AI-Generated Images
One of the most striking aspects of AI-generated images is the way they redefine the very concept of meaning in visual culture. Unlike traditional images—whether painted, photographed, or digitally manipulated—AI images are not direct representations but statistical reconstructions of patterns found in vast datasets. This has profound implications for how we understand meaning, interpretation, and communication in an era where images are no longer simply captured or created by human hands but instead synthesized by algorithms operating within a latent space.
1.1 AI-Generated Images as Metapictures
Aimo Lorenzo (2024) argues that AI-generated images function as “metapictures”, borrowing from W.J.T. Mitchell’s (1995) concept of images that reflexively question their own status as pictures. According to Lorenzo, AI models like Stable Diffusion generate images that outwardly resemble traditional photographic or artistic styles, but the process behind them is fundamentally different: instead of being shaped by direct interaction with reality, AI-generated images are assembled from pre-existing visual data, processed through algorithms that compress and categorize meaning.
This raises a key question: Do AI-generated images truly introduce new visual forms, or do they merely remix existing ones? Antonio Somaini (2024) suggests that the latent space of AI models plays a crucial role in this process, acting as a vast computational archive where meaning is not simply stored but mathematically measured. Somaini notes that language plays an increasingly dominant role in image production, as text prompts shape what is visualized. This suggests a major shift in visual culture theory, where words become the structuring force behind images, reversing the historical relationship between text and image in traditional media.
1.2 The Latent Space as a Cartography of Meaning
The latent space of AI image models is often described as a cartographic system where meaning is mapped according to statistical relationships. Lorenzo (2024) references Pasquinelli and Joler’s (2021) concept of AI as a nooscope, a navigational instrument for mapping knowledge, drawing a parallel between AI’s meaning-making process and geographical exploration. This metaphor is significant because it highlights how AI-generated images do not function indexically—as direct representations of reality—but relationally, as coordinates in a vast conceptual space.
This conceptualization aligns with Parikka’s (2023) observation that “the images that measure also measure measuring.” In other words, AI-generated images are not just representations but active agents in the measurement and categorization of visual meaning. Their very existence reflects the biases and structures of their training data, reinforcing a feedback loop where meaning is derived from computational logic rather than from direct experience or perception.
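The relational notion of meaning described above can be made concrete with a toy sketch. In the embedding spaces that text-to-image models rely on, “meaning” is proximity between vectors: two prompts are close when their coordinates point in similar directions, not because either refers to the world. The three-dimensional vectors and their axis interpretations below are invented purely for illustration; real models use learned embeddings with hundreds of dimensions whose axes have no human-readable labels.

```python
import math

# Toy "latent space": hand-made 3-d embeddings (fabricated values, not
# from any real model). The axes loosely stand for
# [photographic style, painterly style, "house" content].
embeddings = {
    "photo of a house":        [0.3, 0.0, 0.95],
    "oil painting of a house": [0.0, 0.3, 0.95],
    "photo of a cat":          [0.3, 0.0, 0.10],
}

def cosine(u, v):
    # cosine similarity: 1.0 means identical direction, 0.0 unrelated
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "Meaning" here is purely relational: the two house prompts are nearer
# to each other than the two photographic prompts are, because shared
# content outweighs shared style in these toy coordinates.
sim_house = cosine(embeddings["photo of a house"],
                   embeddings["oil painting of a house"])
sim_photo = cosine(embeddings["photo of a house"],
                   embeddings["photo of a cat"])
```

Nothing in this sketch refers to an external scene; similarity emerges entirely from the geometry of the space, which is the point Pasquinelli and Joler's cartographic metaphor is making.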
1.3 The Problem of “Dumb Meaning” in AI Images
Despite their structured approach to meaning, AI-generated images also expose the limitations of algorithmic signification. Bajohr (2023) introduces the concept of “dumb meaning”, arguing that meaning in AI-generated images is not grounded in experiential or cultural context but is instead the product of multiple algorithmic mediations. This issue is further explored by Paglen and Crawford (2021), who critique the political implications of assuming that verbal and visual signs correspond unequivocally. In AI image synthesis, this assumption can lead to systematic biases—for example, the reinforcement of racial and gender stereotypes due to the way AI models categorize and weigh different visual elements.
By examining the mechanisms that structure latent space, we begin to see that AI-generated images do not contain meaning in the way traditional images do. Instead, they operate within a computational framework that prioritizes pattern recognition over semantic depth, raising concerns about the loss of contextual richness in AI-generated visual culture.
1.4 AI Images as a Shift from Representation to Simulation
The emergence of AI-generated images marks a shift from representation to simulation—a process that resonates with Jean Baudrillard’s (1981) theory of hyperreality. Instead of referring to an external reality, AI images exist within a closed system of data-driven visual synthesis. As a result, their relationship to meaning is fundamentally different from that of traditional images.
This is particularly relevant when considering the automation of artistic and photographic styles. Lorenzo (2024) points out that AI-generated images are often designed to mimic the surface aesthetics of painting, photography, and illustration, yet they do so without engaging in the cultural, historical, or material contexts that originally shaped those styles. The result is an aesthetic that appears familiar but is entirely detached from its historical origins.
This detachment is a defining feature of AI-generated aesthetics, as seen in Somaini’s (2024) analysis of how text prompts shape AI image production. When AI generates an image of a Renaissance painting, it is not reproducing the artistic intentions of Renaissance painters but rather simulating their visual characteristics based on statistical correlations. This means that AI images function more as simulations of meaning rather than as representations of reality, making their interpretation fundamentally different from traditional images.
Conclusion: The Algorithmic Mediation of Meaning
The reconfiguration of meaning in AI-generated images forces us to rethink what it means for an image to be meaningful. As Lorenzo (2024) and Somaini (2024) illustrate, meaning in AI-generated images is shaped not by direct representation but by the logic of latent space, where words and numerical data structure visual outcomes. This shift has far-reaching implications:
- It challenges traditional notions of indexicality and evidence, as AI-generated images do not document reality but synthesize plausible representations.
- It redefines the role of the artist from a creator of meaning to a curator of prompts, influencing the way AI interprets visual patterns.
- It raises concerns about bias and decontextualization, as AI models structure meaning based on pre-existing datasets, often reinforcing problematic visual conventions.
By analyzing AI-generated images not just as aesthetic objects but as algorithmic constructs, we gain a deeper understanding of how meaning is produced, circulated, and reconfigured in the digital age. In the next section, we will explore how this shift in meaning relates to the broader crisis of realism and representation in AI-generated imagery.
2. The Crisis of Realism and Representation in AI-Generated Images
One of the most profound disruptions introduced by AI-generated images is the crisis of realism in digital aesthetics. Historically, images—whether painted, photographed, or rendered—have been assessed in relation to their ability to represent reality. Even with the advent of digital manipulation, photography retained its indexical link to the physical world, making it a dominant form of visual evidence (Jacob, 2024). However, with AI image synthesis, the very notion of realism is destabilized, transforming it from an epistemic standard into a mere stylistic effect.
This shift raises critical questions: Can AI-generated images still be considered realistic if they are simulations rather than records of reality? What happens to the concept of “photographic truth” when realism is no longer tied to material causality but to algorithmic approximation? Several researchers at the Aesthetics of Digital Image Synthesis conference explore these questions through the lens of media theory, visual culture, and historical shifts in image production.
2.1 Realism as a Style, Not an Index
Historically, photography’s authority as a record of reality stemmed from its indexical relationship with the world—it was a medium that captured rather than created images (Jacob, 2024). Even as digital photography introduced new possibilities for manipulation, the widespread belief that “photographs don’t lie” persisted in popular culture. The phrase “Pics or it didn’t happen”, which encapsulates this reliance on photographic proof, exemplifies how deeply realism has been tied to the mechanical objectivity of photography.
However, as Jacob (2024) argues, AI-generated images decouple realism from its traditional foundations, reducing photographic realism to aesthetic imitation. Instead of being tied to real-world events, AI-generated images produce plausible visual scenarios based on pattern recognition rather than direct observation. This transformation means that realism is no longer a guarantee of truth—it becomes just another stylistic option within AI image generation.
Somaini (2024) similarly notes that text-to-image models operate on the assumption that verbal prompts can replace real-world referents, leading to a situation where an image “looks like” a photograph but has no direct relationship to actual events. This shift marks a rupture in visual culture: while photography once functioned as evidence, AI-generated images function as probabilistic simulations, constructing their realism through statistical inference rather than physical causality.
2.2 The Proxy-Real: AI Images as Substitutes for Reality
Krešimir Purgar (2024) introduces the concept of the proxy-real to describe how AI-generated images function as substitutes rather than representations. He argues that images have never been simple mirrors of reality but have always acted as proxies, standing in for something rather than directly depicting it. In this sense, AI-generated images are simply an extension of this logic, but with a key difference: instead of being produced through human intentionality, they are assembled through automated computation.
This proxy-realism poses philosophical and ethical challenges. If AI-generated images become indistinguishable from traditional photography, what happens to our ability to differentiate between documentary and synthetic imagery? The increasing use of AI-generated content in journalism, marketing, and entertainment raises concerns about how realistic AI imagery might be exploited to manufacture false narratives. The crisis is not merely aesthetic but also epistemological: if realism is no longer tied to real-world reference points, can we still trust images as sources of knowledge?
Jacob (2024) situates this discussion within a historical critique of late capitalism, arguing that AI-generated realism serves a new function in digital economies. Instead of verifying reality, AI-generated images function as on-demand visual commodities, tailored to individual desires rather than factual accuracy. This shift mirrors broader trends in consumer culture, where visual consumption is increasingly detached from historical or material referents.
2.3 The Problem of Visual Homogenization
Another key issue raised by AI-generated realism is the aesthetic homogenization of images. Florian Cramer (2024) warns of a coming “crapularity”, in which generative AI leads to an oversaturation of synthetic imagery, creating a landscape where AI-produced content dominates the visual field. Because AI models rely on pattern recognition and statistical averaging, the realism they produce is not necessarily diverse or innovative—instead, it is formulaic, repetitive, and optimized for maximum engagement.
This problem is exacerbated by the way AI systems are trained. As Rozenberg (2024) notes, AI-generated images are constrained by the datasets they are built on, meaning that certain visual tropes and styles become more dominant over time. This results in a feedback loop, where AI-generated realism is shaped by pre-existing aesthetic biases, reinforcing a narrow, standardized version of visual culture.
Bernadette Krejs (2024) explores this issue in the context of architectural visualization, where AI-generated images of the home often replicate Western-centric aesthetics rather than offering alternative representations. She argues that this reveals a fundamental limitation of AI image models: despite their supposed open-ended creativity, they tend to reproduce dominant cultural and visual norms, leading to a narrowing of aesthetic possibilities rather than an expansion.
2.4 The Decline of Photographic Authority
The declining authority of photography in the face of AI-generated realism is not just a technological shift—it also represents a cultural transformation in how we perceive images. Purgar (2024) argues that we are witnessing the final stages of a long transition from the pictorial to the post-pictorial condition, in which images are no longer judged by their connection to physical reality but by their effectiveness in constructing aesthetic and conceptual experiences.
Masoudi (2024) highlights the tension between digital-realism and poor images, drawing from Hito Steyerl’s (2009) concept of the “poor image” to argue that low-resolution, user-generated videos resist the perfectionism of AI-generated realism. He suggests that, paradoxically, the glitches and imperfections of amateur digital media have become new markers of authenticity, standing in opposition to the hyper-smooth, hyper-real aesthetic of AI-generated images.
This contrast raises important questions about the future of visual culture:
- Will AI-generated realism eventually replace traditional photography, making all images suspect?
- Will aesthetic imperfection—grain, blur, compression artifacts—become new indicators of truth and authenticity?
- How will photographic realism evolve as AI continues to redefine the boundaries between representation and simulation?
Conclusion: The End of Realism, or a New Phase?
The crisis of realism in AI-generated images is not simply about technological advancement—it is about how visual culture is restructured in response to algorithmic automation. As Jacob (2024) and Purgar (2024) illustrate, realism is no longer tied to indexicality but to statistical plausibility, turning photographic truth into a style rather than an epistemic standard.
At the same time, as Masoudi (2024) suggests, counter-movements in digital culture are already emerging, resisting the aesthetic dominance of AI-generated hyperrealism. Whether through low-resolution imagery, experimental photography, or digital art that exposes the limitations of AI, artists and theorists alike are exploring ways to critique and counteract the homogenization of AI-generated visual culture.
In the next section, we will explore another critical dimension of AI aesthetics: the automation and standardization of creativity, and how AI-generated images are reshaping the role of the artist in contemporary digital culture.
3. The Automation and Standardization of Creativity
One of the most debated aspects of AI-generated images is the automation of artistic production and the potential standardization of creativity. Historically, artistic creation has been understood as a deeply human endeavor, shaped by individual intuition, cultural context, and material constraints. However, the rise of generative AI models—such as Stable Diffusion, DALL·E, and MidJourney—has fundamentally altered this paradigm by introducing algorithmic automation into the creative process. This raises critical questions:
- Is AI expanding or restricting artistic expression?
- Does AI-generated art democratize creativity, or does it reinforce pre-existing aesthetic biases?
- What happens to artistic originality when AI is capable of endlessly remixing styles and influences?
At the Aesthetics of Digital Image Synthesis conference, several researchers tackled these concerns, highlighting both the transformative potential and the limitations of AI in creative practices.
3.1 The Kaleidoscopic Constraint: AI’s Limits in Creative Innovation
Florian Cramer (2024) critiques AI’s generative capacity as being inherently constrained by its structural logic. He argues that AI does not create but recombines, endlessly remixing existing visual materials in a process that he likens to a “glorified kaleidoscope”. While generative AI can produce new variations of known styles, it lacks the ability to introduce qualitative ruptures or conceptual breakthroughs in art.
Cramer’s argument is that generative AI is structurally incapable of true innovation because it operates within predefined datasets. This results in an aesthetic feedback loop, where AI-generated images repeat and reinforce dominant styles, rather than introducing new visual paradigms. He warns that as AI-generated content floods digital platforms, we risk entering a “crapularity”, where cheap, automated, and highly repetitive AI-generated images dominate visual culture.
This critique aligns with the concerns of Rozenberg (2024), who emphasizes the self-referential nature of AI-generated images. He argues that, unlike human artists who draw from lived experience, historical research, and personal experimentation, AI models are constrained by their training data. This means that AI-generated creativity is not a process of discovery but of statistical probability, reinforcing pre-existing norms rather than generating radical new forms.
3.2 From Creative Expansion to Predictable Patterns
While AI-generated images may appear highly diverse, their underlying process of generation is marked by strong constraints. Lotte Philipsen (2024) argues that this is due to how AI systems conceptualize creativity: rather than producing images based on emergent, unpredictable artistic impulses, they rely on pattern recognition and probabilistic modeling to predict what “looks good” according to historical and aesthetic norms.
This results in a paradox:
- On one hand, AI democratizes image-making, allowing anyone to create complex visuals without technical skill.
- On the other hand, it reinforces existing aesthetic conventions, making it difficult for truly novel artistic styles to emerge.
A particularly relevant example of this phenomenon is explored by Bernadette Krejs (2024), who examines how AI-generated images of domestic spaces tend to reproduce Western-centric, highly aestheticized, and commercialized representations of the home. She argues that this reflects not an inherent limitation of AI, but a bias embedded in its training data, which prioritizes the most popular and widely circulated visual styles.
Similarly, Gucher (2024) highlights the ways in which AI-generated kitsch—often seen in hyper-polished, “perfect” digital renderings—extends aesthetic strategies of Pop Art while also limiting them. While Pop Art, as exemplified by Warhol and Koons, deliberately played with kitsch and consumer imagery, AI-generated images unconsciously reproduce these tropes, creating banal, highly artificial aesthetics rather than critical or ironic interventions.
This tendency toward predictable repetition is what Cramer (2024) identifies as the “structural limitation of generative AI”: unlike human artists, who can break rules and introduce radical new forms, AI is locked into a system of probabilistic decision-making, where the “best” image is the one that most closely matches pre-existing stylistic norms.
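The feedback loop that Cramer and Philipsen describe can be illustrated with a deliberately crude simulation: a “generator” that samples outputs in proportion to their frequency in its training data will keep reproducing the majority aesthetic, and minority styles never gain ground. The style names and frequencies below are invented for illustration and stand in for the far more complex statistics of a real diffusion model.

```python
import random
from collections import Counter

# Fabricated training corpus: the majority style dominates the data,
# mirroring the bias toward widely circulated imagery described above.
training_styles = (["minimalist"] * 70
                   + ["brutalist"] * 20
                   + ["vernacular"] * 10)

random.seed(0)  # fixed seed so the simulation is reproducible

# A purely probabilistic "generator": each output is drawn in
# proportion to a style's share of the training data.
outputs = [random.choice(training_styles) for _ in range(1000)]
counts = Counter(outputs)

# The generated corpus reproduces the training distribution: the
# dominant aesthetic stays dominant, and nothing outside the dataset
# can ever appear.
```

The simulation makes the structural point visible: sampling from a fixed distribution can recombine what is already there, but it cannot, by construction, produce a style absent from its training data.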
3.3 The Changing Role of the Artist: From Creator to Curator
As AI tools become more integrated into artistic workflows, the role of the artist is shifting. Rather than being the sole creator of an image, the artist now acts as a curator, editor, and prompt engineer, guiding AI systems to produce desired outputs.
Merzmensch (2024) describes this shift as the “post-anthropocentric turn” in creativity, where human and machine collaboration replaces traditional notions of individual artistic genius. He argues that AI’s ability to synthesize vast amounts of visual data offers artists new ways to explore patterns, associations, and stylistic variations that would be difficult to achieve manually.
This idea is further developed by Guillermet (2024), who explores the relationship between code and image in AI-based art. She argues that AI art continues the legacy of early computer art, where the aesthetic value is not just in the final visual output but in the algorithmic process itself. The tension between automation and intentionality is at the heart of AI aesthetics: while AI can produce technically sophisticated images, their artistic meaning still depends on human intervention, selection, and interpretation.
However, not all scholars view this shift positively. Rozenberg (2024) warns that as AI-generated content becomes more prevalent, the skills associated with traditional artistic creation may become devalued. If AI can generate high-quality images in seconds, what incentive remains for artists to develop their own unique styles and techniques? This echoes long-standing concerns about automation and labor in creative industries, where AI threatens to replace human workers rather than assist them.
3.4 AI as a Tool for Creative Augmentation
Despite these concerns, some researchers see AI as an opportunity rather than a threat. Gucher (2024) argues that rather than viewing AI as a replacement for human creativity, it should be understood as a tool for creative augmentation—one that expands artistic possibilities rather than constraining them.
He points to contemporary AI artists such as Julian van Dieken, who use AI-generated images to reinterpret historical artistic traditions, blending classical painting styles with contemporary pop culture motifs. This suggests that while AI may automate certain aspects of image-making, it can also serve as a catalyst for new forms of hybrid creativity, where human intuition and computational processes work in tandem.
This perspective is shared by Sartori (2024), who examines how AI-generated avatars and digital reincarnations in contemporary art challenge traditional ideas of selfhood and embodiment. She argues that AI offers artists new ways to experiment with identity, narrative, and aesthetic form, rather than merely reproducing past styles.
Conclusion: The Future of AI-Generated Creativity
The automation of creativity represents both a new frontier and a potential risk for artistic production. While AI democratizes access to visual creation, it also raises concerns about aesthetic homogenization, artistic devaluation, and the limits of generative systems.
Key tensions include:
- The risk of oversaturation, where AI-generated images dominate digital culture but fail to introduce genuine innovation.
- The transformation of the artist’s role, from creator to curator, as human-AI collaboration reshapes creative workflows.
- The potential for creative augmentation, where AI is used not as a substitute for human creativity but as a tool for expanding artistic possibilities.
As we move into a future where AI-generated images become increasingly ubiquitous, the challenge will be to balance automation with artistic agency, ensuring that generative tools serve as instruments of creative exploration rather than mere machines of repetition.
In the next section, we will turn to the socio-political and ethical implications of AI-generated images, examining how bias, labor dynamics, and power structures shape the aesthetics of synthetic media.
4. The Socio-Political and Ethical Implications of AI-Generated Images
As AI-generated images become increasingly prevalent, their socio-political and ethical dimensions come under scrutiny. While AI offers new creative possibilities, it also raises profound concerns about bias, labor exploitation, surveillance, and the reinforcement of dominant power structures. AI-generated images are not just aesthetic artifacts—they are products of algorithmic mediation shaped by social, economic, and political forces.
At the Aesthetics of Digital Image Synthesis conference, scholars examined how AI-generated images intersect with colonial histories, economic inequalities, and gendered and racialized biases. The discussion focused on three main issues:
- Bias and Reinforcement of Stereotypes
- The Political Economy of AI-Generated Images
- The Role of AI-Generated Images in Digital Surveillance and Propaganda
4.1 Bias and the Reinforcement of Stereotypes
One of the most urgent ethical concerns surrounding AI-generated images is the replication and amplification of bias. AI image models are trained on massive datasets scraped from the internet—datasets that reflect the structural inequalities of the societies that produce them. This means that gendered, racial, and cultural biases are encoded into AI-generated images, often in ways that reinforce stereotypes and exclusionary visual norms.
Krejs (2024) examines how AI-generated depictions of domestic spaces tend to prioritize Western, upper-middle-class aesthetics, reinforcing a monocultural and consumerist vision of “home”. She argues that this bias is not just an incidental flaw but a structural feature of AI training data, which disproportionately draws from commercial platforms like Instagram and Pinterest. As a result, AI does not generate a diversity of homes but instead repeats and amplifies dominant representations, excluding non-Western, working-class, or marginalized perspectives.
A similar concern is raised by Masoudi (2024), who investigates the tension between high-resolution AI-generated realism and the “bastard poor images” of digital subcultures. He argues that AI’s perfectionist visual approach erases the gritty, imperfect aesthetics of amateur digital media, which have long been used by marginalized communities as counter-hegemonic tools. In doing so, AI risks flattening cultural differences and erasing alternative ways of seeing and representing the world.
Schober (2024) extends this critique to patterns of audience-address in AI-generated images, noting that AI-generated portraits often conform to commercialized visual tropes, such as frontal gaze, idealized lighting, and symmetrical composition. These tropes, she argues, reflect a deep-seated visual bias rooted in advertising and social media culture, further reinforcing normative ideals of beauty, race, and gender.
Paglen and Crawford (2021) have previously critiqued these semiotic assumptions, arguing that AI operates on the flawed premise that verbal and visual signs correspond unequivocally. This assumption not only distorts the complexity of visual meaning but also reinforces hegemonic ways of seeing, privileging Eurocentric, heteronormative, and capitalist visual frameworks.
4.2 The Political Economy of AI-Generated Images: Labor, Automation, and Exploitation
Beyond its aesthetic and epistemic implications, AI image synthesis is also embedded in a broader system of economic extraction and labor automation. The rapid expansion of AI-generated images has disrupted creative industries, raising concerns about job displacement, data colonialism, and digital labor exploitation.
Jacob (2024) situates AI-generated images within a larger history of automation under late capitalism, arguing that generative AI functions as a new phase in the division of labor. Whereas traditional photography and digital imaging required skilled human labor, AI image synthesis automates visual production, reducing the need for photographers, illustrators, and designers. While this democratizes access to image-making, it also leads to the deskilling of creative professions, concentrating economic power in the hands of AI developers and tech corporations.
Masoudi (2024) connects these economic shifts to the aesthetics of AI-generated realism, arguing that corporate AI models prioritize polished, high-resolution images because they align with commercial advertising needs. In contrast, the poor, low-resolution images of amateur creators—which historically played a role in political activism, documentary journalism, and subcultural expression—are systematically devalued. This reflects a broader political economy of AI-generated images, where clean, hyperrealistic AI images are profitable, while messy, imperfect, human-made images are marginalized.
Klink (2024) introduces the concept of generation loss as a cultural and economic metaphor for these transformations. Just as digital compression degrades image quality over repeated reproductions, AI-generated images contribute to the erosion of creative labor by replacing human artistry with automated production. This loss is not just technical—it is an erasure of artistic expertise, historical knowledge, and subversive aesthetic traditions.
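Klink's metaphor has a direct technical analogue: each lossy re-encoding discards information, and the losses compound across generations. The sketch below simulates this with a one-dimensional “image” and a simple smoothing filter standing in for a lossy codec (a deliberately simplified stand-in, not an actual JPEG pipeline); the signal's variance, used here as a crude proxy for detail, shrinks with every generation.

```python
def reencode(signal):
    # One lossy "generation": a 3-tap moving average models the
    # high-frequency information discarded by each re-encoding step.
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def detail(signal):
    # Variance as a crude proxy for how much detail survives.
    mean = sum(signal) / len(signal)
    return sum((x - mean) ** 2 for x in signal) / len(signal)

# A maximally detailed source: alternating 0/1 "pixels".
img = [float(i % 2) for i in range(32)]

generation = img
losses = []
for _ in range(5):
    generation = reencode(generation)
    losses.append(detail(generation))
# Detail decays monotonically: every pass through the lossy step
# degrades the copy of a copy a little further.
```

Each generation is a copy of a copy, and the degradation is cumulative and irreversible, which is precisely the property Klink borrows as a metaphor for the erosion of creative labor.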
Another key issue in the political economy of AI-generated images is data colonialism—the extraction of vast amounts of human-made visual content to train AI models, often without consent. Sartori (2024) examines this issue in the context of digital avatars and posthuman identity, arguing that AI-generated bodies are assembled from a vast archive of human expressions, gestures, and anatomical details, raising questions about ownership, agency, and digital embodiment.
4.3 AI-Generated Images and the Politics of Surveillance and Propaganda
Finally, AI-generated images raise critical concerns about surveillance, propaganda, and misinformation. The ability to generate photorealistic fake images at scale has enormous political implications, from deepfake technology to algorithmic propaganda.
Charlotte Klink (2024) explores the intersection of AI-generated images and migration discourse, arguing that AI images contribute to both the visualization and erasure of migrant identities. While AI-generated media can create empathetic visual narratives, it can also depersonalize and decontextualize the lived experiences of migrants, reducing complex realities to generic, emotionally-manipulative visual tropes.
Similarly, Krejs (2024) warns that AI-generated representations of housing and domestic life may be used to reinforce gentrification narratives, erasing the presence of working-class and marginalized communities. By generating an idealized, sanitized vision of urban life, AI-driven visuals serve as aesthetic tools for real estate speculation, urban redevelopment, and social exclusion.
Somaini (2024) also addresses the role of AI-generated images in digital propaganda, noting that the increasing reliance on AI-generated visuals in news media, marketing, and political communication blurs the line between fact and fiction. He argues that AI-generated realism is not neutral—it is a strategic tool that can be mobilized to shape political perception and public opinion.
These concerns are echoed by Schmutzer (2024), who critiques the cultural excitement around AI-generated art, warning that it may distract from the deeper political and ethical questions surrounding AI. He calls for a critical engagement with AI aesthetics that prioritizes political agency and ethical responsibility, rather than merely celebrating AI’s technological novelty.
Conclusion: Towards a Critical Aesthetic of AI-Generated Images
The socio-political and ethical implications of AI-generated images reveal the deep entanglement between technology, aesthetics, and power. While AI promises new forms of visual creativity, it also:
- Reinforces biases, marginalizing alternative visual cultures.
- Restructures the political economy of image production, leading to job displacement and the devaluation of human artistic labor.
- Expands the reach of digital propaganda and surveillance, making visual misinformation more pervasive and harder to detect.
Rather than accepting AI-generated images as neutral artifacts, scholars argue that we must critically engage with their implications, recognizing that AI aesthetics are not just about beauty or creativity, but about control, visibility, and representation.
In the next section, we will examine the historical continuities and ruptures in AI aesthetics, situating AI-generated images within a broader lineage of artistic and technological experimentation.
5. Historical Continuities and Ruptures in AI Aesthetics
While AI-generated images may seem like a radical break from previous artistic and technological traditions, they can also be understood within a longer history of computational, generative, and mechanical image-making. From early cybernetics to computer art, from the historical avant-garde to digital aesthetics, AI-generated images both extend and transform existing artistic and media practices.
At the Aesthetics of Digital Image Synthesis conference, scholars explored how AI aesthetics relate to past movements in visual culture, raising key questions:
- Is AI-generated art truly a break from historical traditions, or does it extend past artistic experiments?
- How do concepts from surrealism, Pop Art, and postmodernism shape AI aesthetics?
- What role do early computer artists and cybernetic theorists play in anticipating the current AI-generated image revolution?
By tracing the historical continuities and ruptures in AI aesthetics, we can better understand the deeper cultural and artistic logic that underpins these emerging forms of visual production.
5.1 AI Aesthetics and the Legacy of Surrealism and Pop Art
Several scholars at the conference drew comparisons between AI-generated images and historical avant-garde movements, particularly surrealism and Pop Art.
Florian Gucher (2024) argues that AI images share deep affinities with surrealist and pop aesthetics, particularly in their disruptions of representation and embrace of kitsch and pastiche. He notes that AI-generated images often resemble Max Ernst’s surrealist collages, in which disparate visual elements are cut, reassembled, and recombined in strange, dreamlike ways. However, unlike surrealist artists who deliberately engaged with the unconscious and irrational, AI-generated surrealism is statistical rather than psychological—it emerges from algorithmic recombination rather than spontaneous imagination.
Gucher also connects AI aesthetics to Pop Art, particularly its embrace of kitsch and consumer imagery. He points to Jeff Koons’ balloon animals and Warhol’s screen prints as precedents for AI’s hyper-polished, commercial aesthetic. However, he argues that while Pop Art knowingly played with mass culture, AI-generated images uncritically reproduce commercial aesthetics, lacking the critical distance or irony of their Pop predecessors.
Similarly, Lotte Philipsen (2024) critiques the aesthetic ideology of AI-generated images, arguing that they are deeply influenced by postmodernist theories of pastiche and simulation. She notes that many AI-generated images operate within a framework of aesthetic “naturalness”, reinforcing conventional visual styles rather than disrupting them. By analyzing AI aesthetics through the lens of postmodernism’s emphasis on surface, simulation, and intertextuality, Philipsen suggests that AI does not create new styles, but endlessly recycles existing ones.
5.2 The Role of Cybernetics and Early Computer Art
While AI-generated images are often framed as a purely 21st-century development, they are deeply rooted in the history of cybernetics, early computer art, and algorithmic aesthetics.
Aline Guillermet (2024) traces the origins of AI-generated art to post-war cybernetics and the early experiments in computer-generated imagery in the 1960s and 1970s. She highlights how early computer artists—such as Harold Cohen, Vera Molnár, and Frieder Nake—used algorithmic rule systems to generate aesthetic compositions, long before contemporary AI models. Cohen’s program AARON, for instance, could autonomously generate drawings, much like how modern AI systems create images from text prompts.
Guillermet argues that the key difference between early computer art and contemporary AI-generated images lies in the opacity of the process. Whereas early computer artists designed their own algorithms and could modify them, today’s AI models operate within black-box neural networks, making their creative decisions largely inscrutable to users. This shift represents a key rupture in the history of algorithmic aesthetics, as artists today engage with AI more as a tool than as a medium.
Antonio Somaini (2024) similarly emphasizes the historical link between AI-generated images and early cybernetic theories. He notes that AI image generation is best understood not as a replacement for traditional art, but as a continuation of long-standing debates about the role of automation in creativity. From the automated weaving looms of the 19th century to mid-20th-century algorithmic music composition, AI-generated aesthetics belong to a much older tradition of mechanical creativity.
5.3 AI, Latent Space, and the Cartographic Metaphor
One of the most novel aspects of AI-generated aesthetics is the concept of latent space—the vast mathematical representation of images within a model. Lorenzo Aimo (2024) and Somaini (2024) argue that this concept marks both a continuity and a rupture in visual culture.
- Continuity: Latent space can be seen as a continuation of cartographic traditions, where meaning is mapped, measured, and structured. Pasquinelli and Joler (2021) liken AI’s latent space to a nooscope, a tool for “navigating the space of knowledge.” This aligns with historical attempts to map meaning visually, from Renaissance perspective grids to modern data visualization techniques.
- Rupture: Unlike traditional maps, latent space is non-representational—it does not correspond to a real-world geography but instead exists as a statistical abstraction. This marks a fundamental break in how images are structured: whereas traditional art was bound to physical representation, AI-generated images emerge from an abstract mathematical space.
Somaini (2024) suggests that this shift challenges our entire conception of images as representational artifacts. Rather than capturing pre-existing visual realities, AI-generated images simulate plausible visual scenarios based on statistical relationships. This echoes Baudrillard’s (1981) concept of hyperreality, where simulations no longer refer to reality but generate their own self-contained worlds.
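The structural point made here can be illustrated concretely. The following sketch is purely didactic and corresponds to no specific model's API: it uses a fixed random projection as a stand-in for a trained decoder network, simply to show that a "point" in latent space is a vector sampled from a statistical prior, and that the resulting image corresponds to no real-world referent.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

LATENT_DIM = 512  # typical order of magnitude for generative image models

def decode(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained decoder: maps a latent vector to an 8x8
    grayscale 'image'. A real model learns this mapping from data; here
    a fixed random projection illustrates only the structure."""
    projection = rng.standard_normal((LATENT_DIM, 64))
    pixels = z @ projection                # linear 'decoding'
    pixels = 1 / (1 + np.exp(-pixels))     # squash values into [0, 1]
    return pixels.reshape(8, 8)

# A point in latent space: sampled from a prior distribution,
# not captured from the world.
z = rng.standard_normal(LATENT_DIM)
image = decode(z)

assert image.shape == (8, 8)
assert (image >= 0).all() and (image <= 1).all()
```

Nothing in this pipeline ever touches a referent: the image is derived entirely from a sampled vector and learned (here, simulated) statistical associations, which is precisely the non-representational character of latent space described above.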
5.4 The Future of AI Aesthetics: Towards a New Visual Paradigm?
Given these historical continuities and ruptures, the question remains: Are AI-generated images creating a genuinely new visual paradigm, or are they merely the latest iteration of past aesthetic trends?
Birgit Mersmann (2024) introduces the concept of metacreativity to describe AI’s capacity for self-referential artistic production. She examines works like Refik Anadol’s Machine Hallucinations, which transform AI-generated dreamscapes into immersive installations, and argues that these projects represent a new phase in the history of generative aesthetics, one where machines are not just producing images but theorizing their own creative processes.
Meanwhile, Vladimir Alexeev (Merzmensch) (2024) sees AI as a threshold moment in artistic history, where human-machine collaboration will fundamentally redefine the creative act. He argues that the next stage of AI aesthetics will involve not just generating images but critically engaging with the mechanisms of AI itself, producing art that exposes, deconstructs, and interrogates the nature of machine creativity.
Conclusion: AI Aesthetics Between Innovation and Repetition
The historical analysis of AI-generated images reveals both continuities and ruptures:
- AI aesthetics draw from surrealism, Pop Art, and postmodernism, but often lack their critical reflexivity.
- AI-generated images continue the legacy of cybernetic and algorithmic art, but introduce black-box complexity that removes creative transparency.
- AI operates within a cartographic metaphor, but latent space marks a rupture in how images are structured and generated.
As scholars debate whether AI represents a new artistic paradigm or a repetition of past forms, one thing remains clear: AI-generated aesthetics are deeply embedded in longer histories of art, technology, and representation. Understanding these histories allows us to critically engage with AI’s role in reshaping visual culture in the 21st century.
Conclusion: AI-Generated Images at the Intersection of Meaning, Realism, Creativity, Politics, and History
The aesthetics of AI-generated images are not merely a technological novelty—they represent a transformative moment in visual culture. Across the discussions at the Aesthetics of Digital Image Synthesis conference, scholars examined how AI-generated images reconfigure meaning, disrupt realism, automate creativity, raise ethical and political questions, and continue or rupture historical traditions. Each of these dimensions reveals deep tensions at the heart of AI-generated aesthetics, forcing us to rethink long-held assumptions about what images are, how they function, and what roles they play in society.
Reconfiguring Meaning: From Representation to Simulation
One of the most striking insights from the conference was that AI-generated images fundamentally alter the way meaning is produced. Unlike traditional visual media, where meaning is tied to representation, authorship, and cultural context, AI-generated images operate through statistical inference and probabilistic modeling (Aimo, 2024; Somaini, 2024).
- Instead of capturing reality, AI-generated images simulate plausibility, aligning their surfaces with the expectations of human viewers rather than referring to an external world.
- The latent space of AI models functions as a cartographic system for visual probability, where meaning is mapped mathematically rather than constructed semiotically (Parikka, 2023; Pasquinelli & Joler, 2021).
- This shift challenges long-standing visual culture theories that assume an indexical or symbolic relationship between images and the world (Bajohr, 2023; Paglen & Crawford, 2021).
This transformation of meaning has epistemological consequences: AI-generated images do not operate as stable signifiers but as fluid, adaptive constructs shaped by algorithmic decision-making. This makes them malleable and unpredictable, reinforcing the need for new frameworks to analyze digital visuality.
The Crisis of Realism: From Indexicality to Proxy-Real
A second major theme was the destabilization of realism in AI-generated images. For centuries, visual realism has been associated with photographic indexicality—the idea that a photograph bears a direct, causal relationship to the real world (Jacob, 2024). However, AI-generated realism is synthetic, built from statistical models rather than material traces.
- Photographic truth is now just one style among many, as AI-generated images simulate photographic aesthetics without requiring a connection to real-world referents (Jacob, 2024; Purgar, 2024).
- The proxy-real has replaced documentary realism, meaning that AI images do not represent reality but rather stand in for it, influencing our perception of what is real (Purgar, 2024).
- This has profound implications for media literacy, misinformation, and visual authenticity, as AI-generated deepfakes blur the boundaries between fiction and fact (Somaini, 2024).
This shift calls into question the authority of images in an era where visual evidence can be easily fabricated, forcing us to rethink what constitutes visual truth in the digital age.
Automating Creativity: From Human Expression to Algorithmic Production
Another critical discussion at the conference revolved around the automation of artistic creation. AI-generated images have redefined the role of the artist, shifting from a model of human authorship to human-machine collaboration (Alexeev, 2024; Guillermet, 2024).
- AI image generators function as hyper-efficient remix engines, producing endless variations of existing styles but struggling to generate truly new aesthetic forms (Cramer, 2024).
- The kaleidoscopic constraint of AI-generated images means that while they can recombine visual elements, they remain trapped in statistical predictability, lacking conceptual depth or true innovation (Cramer, 2024; Rozenberg, 2024).
- This raises concerns about the deskilling of creative labor, as AI-generated content threatens to undermine traditional artistic professions while concentrating economic power in tech corporations (Jacob, 2024).
At the same time, some scholars emphasized that AI can serve as a tool for creative augmentation, enabling new hybrid artistic forms that combine human intuition with machine learning (Gucher, 2024; Sartori, 2024). The challenge moving forward is to balance automation with artistic agency, ensuring that AI does not simply replace human creativity but expands it in meaningful ways.
The Political and Ethical Stakes: AI, Bias, and Power
The discussions also underscored the political and ethical dimensions of AI-generated images, revealing how power, bias, and economic exploitation are embedded in AI aesthetics.
- AI models disproportionately reflect Western, capitalist, and Eurocentric aesthetic norms, reinforcing global visual inequalities rather than democratizing image production (Krejs, 2024; Schober, 2024).
- The political economy of AI-generated images raises concerns about data colonialism, as AI models rely on vast datasets extracted without consent from human artists, photographers, and communities (Masoudi, 2024; Sartori, 2024).
- AI-generated realism is being weaponized for propaganda and misinformation, increasing the manipulative potential of digital media (Klink, 2024; Somaini, 2024).
These issues highlight the urgent need for critical AI literacy, ensuring that artists, researchers, and policymakers actively engage with AI’s ethical challenges rather than passively accepting its technological advances.
Historical Continuities and Ruptures: AI as the Next Phase in Computational Aesthetics
Finally, scholars at the conference debated whether AI-generated images represent a true artistic revolution or a continuation of earlier computational aesthetics. While AI introduces new modes of image production, its logic is deeply tied to historical precedents in cybernetics, surrealism, Pop Art, and algorithmic art (Guillermet, 2024; Philipsen, 2024).
- AI aesthetics draw from surrealist collage and Pop Art pastiche, but often lack the critical reflexivity of these earlier movements (Gucher, 2024).
- AI-generated art extends the legacy of early computer artists, such as Harold Cohen and Vera Molnár, but differs in that contemporary AI models are opaque, inaccessible, and controlled by corporate interests (Guillermet, 2024).
- The concept of latent space represents both a continuity with historical mapping traditions and a rupture in how images are structured, shifting from perspectival representation to mathematical abstraction (Somaini, 2024; Aimo, 2024).
These historical connections suggest that AI-generated images are not emerging in a vacuum—rather, they are deeply intertwined with broader artistic and technological histories.
Final Thoughts: Towards a Critical Aesthetics of AI-Generated Images
AI-generated images are not simply new tools for artistic production—they represent a fundamental shift in how we understand, create, and interpret images. As this conference demonstrated, the aesthetics of AI-generated images must be analyzed through multiple lenses:
- Theoretical (redefining meaning and representation)
- Aesthetic (disrupting realism and visual styles)
- Economic (reshaping creative labor and image markets)
- Political (entrenching bias and power asymmetries)
- Historical (continuing and disrupting past artistic traditions)
Moving forward, artists, researchers, and theorists must develop a critical aesthetics of AI-generated images—one that goes beyond technological fascination to interrogate the cultural, social, and political implications of this emerging visual paradigm. AI is not merely an artistic tool; it is a contested space of knowledge production, automation, and ideological struggle. How we engage with AI-generated images today will shape the future of visual culture for generations to come.
Bibliography
1. Conference Proceedings
- Aesthetics of Digital Image Synthesis Conference (2024, November 7-9). Klagenfurt, Austria. This conference focused on analyzing the aesthetics and various styles of representation produced by AI, critically examining the potential impact of technology on the art world, society, and our understanding of creativity.
2. Journal Articles and Book Chapters
- Gucher, F. (2024). AI Images and the Legacy of Surrealism and Pop Art. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This paper explores the deep affinities between AI-generated images and historical avant-garde movements, particularly surrealism and Pop Art.
- Philipsen, L. (2024). Postmodernism and AI Aesthetics: The Ideology of Naturalness. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This study critiques the aesthetic ideology of AI-generated images, arguing that they reinforce conventional visual styles rather than disrupting them.
- Guillermet, A. (2024). From Cybernetics to AI: The Evolution of Generative Art. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This research traces the origins of AI-generated art to post-war cybernetics and early computer-generated imagery experiments.
- Somaini, A. (2024). Latent Spaces and the Cartography of AI-Generated Images. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This paper examines the concept of latent space in AI models, comparing it to traditional cartographic systems for visual probability.
- Krejs, B. (2024). Bias in AI-Generated Domestic Spaces: A Monocultural Aesthetic. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This study analyzes how AI-generated depictions of domestic spaces prioritize Western, upper-middle-class aesthetics, reinforcing a monocultural vision of “home.”
- Masoudi, M. (2024). AI-Generated Realism vs. Digital Subcultures: The Erasure of Imperfection. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This research investigates the tension between high-resolution AI-generated realism and the “bastard poor images” of digital subcultures.
- Schober, A. (2024). Visual Tropes in AI-Generated Portraits: Reinforcing Normativity. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This paper critiques patterns of audience-address in AI-generated images, noting their conformity to commercialized visual tropes.
- Jacob, P. (2024). Automation and the Political Economy of AI-Generated Images. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This study situates AI-generated images within a larger history of automation under late capitalism, discussing their impact on creative industries.
- Klink, C. (2024). AI-Generated Images and Migration Discourse: Visualization and Erasure. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This research explores the intersection of AI-generated images and migration discourse, analyzing their role in both visualizing and erasing migrant identities.
- Sartori, D. (2024). Data Colonialism and Digital Embodiment in AI-Generated Avatars. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This paper examines the extraction of human-made visual content to train AI models, raising questions about ownership, agency, and digital embodiment.
- Mersmann, B. (2024). Metacreativity: AI and the Emergence of New Artistic Subjects. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This study introduces the concept of metacreativity to describe AI’s capacity for self-referential artistic production.
- Alexeev, V. (2024). Human-Machine Collaboration: Redefining the Creative Act in the Age of AI. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This research discusses how human-machine collaboration will fundamentally redefine the creative act, emphasizing the need for critical engagement with AI’s mechanisms.
3. Related Bibliography
- Achlioptas, P., Ovsjanikov, M., Haydarov, K., Elhoseiny, M., & Guibas, L. (2021). ArtEmis: Affective Language for Visual Art. arXiv preprint arXiv:2101.07396. This paper introduces ArtEmis, a large-scale dataset paired with machine learning models to predict emotional responses to art.
- Cetinic, E., & She, J. (2022). Understanding and Creating Art with AI: Review and Outlook. ACM Transactions on Multimedia Computing, Communications, and Applications, 18(2), 1-21. This comprehensive review explores how AI is used to analyze and create art, providing new perspectives on the development of artistic styles.
- Manovich, L. (2024). The Aesthetics of AI: Digital Image Synthesis and Its Discontents. In Proceedings of the Aesthetics of Digital Image Synthesis Conference. This keynote address discusses how AI-generated images are capable of producing something ‘genuinely new,’ yet remain images made of images without direct reference to reality.
- Schmidhuber, J. (1997). Low-Complexity Art. Leonardo, 30(2), 97-103. This paper presents an algorithmic theory of beauty, suggesting that the most aesthetically pleasing images are those that can be encoded by the shortest description.
- van der Nagel, E. (2020). Verifying Images: Deepfakes, Control, and Consent. Porn Studies, 7(4), 427-431. This article discusses the ethical implications of deepfakes, particularly in relation to consent and image verification.