White Paper

AI under scrutiny: environmental and creative demons or a mirror of our choices?

Editorial Team · Last updated January 26, 2026

Is AI truly a threat to the planet and art? Discover the data on energy consumption, water footprint, and cognitive biases.


Table of contents:

  1. From Demonization to Data
  2. The hidden side of the cloud: AI consumption and water footprint
  3. Creativity under fire: human bias against algorithmic art
  4. AI as a shared responsibility

From Demonization to Data

Artificial Intelligence (AI) is often described in extreme terms: on one hand, as a machine that devours energy and water; on the other, as a threat to the authenticity of human creativity.

These criticisms contain elements of truth, but scientific evidence suggests a more nuanced picture: the impact of AI is not an “intrinsic” flaw of the technology, but rather the result of human choices about where data centers are built, the energy that powers them, the efficiency of the systems, and the cultural norms and usage rules we establish.

To address this issue responsibly, we must move from panic to plans: starting with credible data, recognizing uncertainties, and focusing on concrete levers for mitigation.

The hidden side of the cloud: AI consumption and water footprint

Training and using AI models, especially the largest ones, requires significant computing power. This translates into substantial energy consumption and, both directly and indirectly, the use of water: directly to dissipate the heat generated, and indirectly through the water embedded in electricity production.

A. Energy consumption: a growing share, not (yet) out of control

Recent analyses show that AI-related workloads represent a growing share of the energy consumed by data centers. Some prospective scenarios indicate that if AI demand continues to grow rapidly without specific optimizations, these workloads could account for a significant fraction—estimated in some cases to be around one-fifth—of total data center consumption in certain regions or configurations.

The key point is not to focus on a single percentage but to recognize the trajectory: without efficiency and targeted policies, AI could become one of the main drivers of digital consumption growth.

B. Water footprint: an impact that is often local

Beyond energy, data centers require large amounts of water, especially where evaporative cooling systems are used. Various technical analyses indicate that, depending on conditions, up to about 2 liters of water may be needed for every kWh a data center consumes, counting both direct cooling and the water implicit in electricity production.

This figure is an indicative value that can vary significantly depending on technology, climate, and the energy mix. The most critical impact is not just global, but local: data centers located in water-stressed areas can contribute to tensions over water use, with significant social and environmental implications.
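To make these figures concrete, here is a minimal back-of-envelope sketch in Python. The 2 L/kWh value is only the indicative upper bound cited above, and the 10 MWh workload is a hypothetical example, not a measured figure:

```python
# Back-of-envelope water-footprint estimate for a data center workload.
# liters_per_kwh defaults to ~2 L/kWh, the indicative upper bound cited
# above; real values vary widely with cooling tech, climate, and energy mix.

def water_footprint_liters(energy_kwh: float, liters_per_kwh: float = 2.0) -> float:
    """Estimate total water use (direct cooling plus water embedded in
    electricity generation) for a given energy consumption."""
    return energy_kwh * liters_per_kwh

# Hypothetical workload: 10 MWh (10,000 kWh) of compute.
estimate = water_footprint_liters(10_000)
print(f"Upper-bound water estimate: ~{estimate:,.0f} liters")
```

The point of such a sketch is not precision but sensitivity: halving `liters_per_kwh` (e.g., through dry or recirculating cooling) halves the estimate, which is exactly why the local choices discussed next matter so much.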

C. AI Sustainability: it depends on choices, not fate

The literature converges on one point: AI can have a significant environmental footprint, but that footprint is largely adjustable through infrastructural and design choices.

The main levers include:

  • powering data centers with increasing shares of renewable energy,
  • improving cooling efficiency (e.g., liquid systems, heat recovery, water recirculation),
  • designing more efficient models—even smaller, specialized ones—instead of focusing solely on increasingly large architectures.

Creativity under fire: human bias against algorithmic art

On the cultural front, generative AI is at the center of a heated debate: is it truly capable of “creating” or is it merely imitating? And, most importantly, how do people react to works generated by algorithms?

Experimental psychology and neuroaesthetics offer some interesting answers: systematic biases exist in how we perceive art depending on who we believe the author is.

A. The label bias: humans “win” when the image is the same

Several studies show that when the same visual work is presented with different labels (“created by a human artist” vs. “generated by AI”), people tend to evaluate it more positively—in terms of creativity, depth, or value—when they believe it is the work of a human.

These experiments indicate an implicit bias: many associate the idea of authentic creativity, effort, and intention only with human action, while algorithmic origin is perceived as less “real,” even when the visual result is identical.

In other words, what we reject is not the image itself, but the absence of a biography, a struggle, or a human experience behind it.

B. When origin is hidden (and when quality changes everything)

The picture, however, is not monolithic. Other studies highlight that in “blind” contexts where the origin of the works is not revealed, many people strongly appreciate—and sometimes prefer—AI-generated works over human ones, especially in certain styles or genres.

Furthermore, aesthetic quality matters significantly: works with high visual or conceptual impact can receive positive judgments even after the algorithmic origin is revealed, while still keeping the discussion on authenticity and artistic value alive.

Overall, research suggests that the bias against AI art is real but dynamic: it depends on the label, the context, the quality of the work, and cultural evolution over time.

C. AI as a new medium, not just a substitute

From a historical perspective, many innovations have gone through similar phases of rejection: photography was accused of “killing” painting, and synthesizers were seen as the end of authentic music. Today, they are integral parts of creative ecosystems.

In this light, AI can be seen as a new medium: a tool that amplifies the ability to explore ideas, generate variations, and prototype at lightning speed. In the hands of artists, it can become an extension of the creative process, not necessarily a replacement.

This does not eliminate ethical and legal hurdles: unresolved issues remain regarding dataset transparency, copyright, consent, and compensation for human artists whose works were used to train models. These aspects require regulation, industry agreements, and new governance practices, not just a change in perception.

AI as a shared responsibility

Environmental and creative criticisms of AI are important: they highlight real risks that should not be downplayed. But data and research invite us to shift the focus:

  • The problem is not simply that “AI exists,” but how it is designed, powered, and used.
  • The environmental impact can be significantly reduced through more responsible infrastructure and design choices.
  • Our judgments on AI creativity reflect deep-seated biases linked to the idea of human authenticity, which may evolve but must be addressed with transparency, clear rules, and respect for artists’ rights.

For AWorld, the message is clear: AI doesn’t need to be demonized; it needs to be made accountable. It is not about stopping progress, but about steering it toward sustainability, equity, and human well-being, always starting from real data and an open dialogue on trade-offs.

Change is in our hands

AWorld supports your journey toward sustainability and well-being, turning your stakeholders into true agents of change.

Contact us