“AI will reach singularity by 2030.” Bold claims like this one, attributed to OpenAI CEO Sam Altman, illustrate the immense hype surrounding artificial intelligence (AI). Indeed, modern AI systems are seen as beacons of hope in nearly every field. In medicine, they are expected to detect diseases earlier and help develop new treatments. In the fight against climate change, they promise to optimize the energy sector and mobility. Education could also be transformed by personalized learning systems. And when it comes to climate adaptation, AI is increasingly used to predict extreme weather events, map risks, and help cities adjust their infrastructure to heatwaves. In short: AI is celebrated as a miracle cure for solving humanity’s most pressing problems.
But this euphoric narrative often overshadows a more fundamental issue. Much attention is paid to AI’s energy and resource consumption, yet a more crucial question remains: Can AI actually create innovation, or is it merely a tool for reassembling existing knowledge? Today’s systems like GPT-4 are impressive at producing fluent text, but they draw entirely on pre-existing data. True creative breakthroughs, scientific paradigm shifts, and disruptive ideas still originate from human intuition, curiosity, and experience.
That’s not to ignore the environmental impact: Training large models consumes vast amounts of energy and leaves behind a significant carbon footprint. Training GPT-3 alone is estimated to have emitted around 550 tons of CO₂ (Patterson et al., 2021). And that’s without accounting for the environmental toll of the required hardware: Specialized chips and servers demand enormous resources and materials, often at the expense of ecosystems and communities in mining regions. While some companies are working on more efficient algorithms and sustainable infrastructure, reality still lags behind these ambitions. Without clear ecological guidelines, technological progress risks continuing at the planet’s expense.
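To make such figures tangible, here is a rough back-of-envelope calculation. It uses the training-energy estimate for GPT-3 from Patterson et al. (2021) and an assumed average carbon intensity for the data center’s grid mix; both inputs are published estimates, not measured values.

```python
# Back-of-envelope estimate of training emissions.
# The energy figure follows the published estimate for GPT-3
# (Patterson et al., 2021); the carbon intensity is an assumed
# average for the data center's electricity mix.

training_energy_mwh = 1_287   # estimated energy for one training run, in MWh
carbon_intensity = 0.429      # assumed kg CO2e per kWh of electricity

energy_kwh = training_energy_mwh * 1_000
emissions_tons = energy_kwh * carbon_intensity / 1_000  # kg -> metric tons

print(f"Estimated training emissions: {emissions_tons:.0f} t CO2e")  # ~552 t
```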
The glowing vision of AI as a universal savior begins to look less like a revolutionary shift and more like a reflection of our existing mindset—only faster.
When AI Becomes a Technological Miracle Cure
Faced with these contradictions, critical questions arise: Is AI really the solution to all our modern problems—or are we simply creating new ones? What ecological and social side effects accompany the AI boom? And who decides how an AI should solve a problem? One thing is certain: An AI always acts according to the goals and priorities we set for it.
Consider this thought experiment: What if an AI were tasked with achieving climate goals at all costs? In extreme cases, it might weigh human suffering against emission reductions. Whether it prioritizes individual well-being or the collective good depends on the values it has been trained with. The question of how AI “thinks” is thus closely tied to societal values, ethical boundaries, and—most importantly—who defines them.
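A toy optimization makes this concrete. The sketch below is purely illustrative: the policy options, numbers, and weights are invented, but it shows how the “best” answer flips the moment the objective changes.

```python
# Toy illustration: the "optimal" policy depends entirely on the
# objective we hand the system. All options and numbers are invented.

policies = {
    # name: (emission reduction in %, social cost on a 0-10 scale)
    "shut down heavy industry overnight":  (60, 9),
    "carbon tax with social compensation": (35, 3),
    "efficiency programs only":            (15, 1),
}

def best(objective):
    """Return the policy that maximizes the given objective."""
    return max(policies, key=lambda name: objective(*policies[name]))

# Objective 1: reduce emissions at all costs -> social cost is ignored.
print(best(lambda reduction, cost: reduction))
# -> "shut down heavy industry overnight"

# Objective 2: emissions minus a weighted penalty for social cost.
print(best(lambda reduction, cost: reduction - 8 * cost))
# -> "carbon tax with social compensation"
```

The point is not the numbers but the structure: whoever chooses the objective and the weights decides what counts as the “right” solution.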
A real-world example from the U.S. highlights how societal biases can creep into AI systems: Facial recognition software used by U.S. police forces misidentified several innocent Black men as suspects. The problem wasn’t just technical; it stemmed from biased training data and design decisions that reproduced systemic inequalities. The algorithms were significantly less accurate on non-white faces, especially those of Black individuals. The Gender Shades study found error rates of up to 35% for darker-skinned women, compared to under 1% for lighter-skinned men (MIT Media Lab, 2018).
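Numbers like these come from disaggregated evaluations, meaning error rates are measured per demographic group rather than only overall. The sketch below uses entirely synthetic data that merely mimics the reported disparity, and shows how a system can look acceptable on average while failing one group badly.

```python
# Disaggregated evaluation: overall accuracy can look fine while error
# rates differ sharply between groups. The data here is synthetic and
# only mimics the disparity reported by the Gender Shades study.

from collections import defaultdict

# (group, prediction_was_correct) pairs, 100 samples per group
results = [("group_a", True)] * 99 + [("group_a", False)] * 1 \
        + [("group_b", True)] * 65 + [("group_b", False)] * 35

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct  # True/False counts as 1/0

overall = sum(errors.values()) / sum(totals.values())
print(f"overall error rate: {overall:.0%}")   # 18% - hides the disparity
for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# group_a: error rate 1%
# group_b: error rate 35%
```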
Different priorities and blind spots can lead to radically different “solutions”—from technological efficiency to social redistribution. What’s considered “right” ultimately depends on the worldview and objectives behind a system. AI is therefore not a neutral all-rounder but a tool shaped by human assumptions. It cannot think—it merely detects and reproduces patterns from the data it was trained on. As such, it cannot develop its own moral framework or make ethical decisions. These too must be programmed by us. Ethical trade-offs—so essential in many societal debates—are beyond AI’s reach unless explicitly defined by humans.
So who really benefits from this AI revolution—and who is left behind? Right now, it’s primarily large tech companies that control the necessary data and computing power, concentrating influence in the hands of a few. Experts warn that AI development risks becoming monopolized, potentially stifling innovation and increasing global dependency (Yale Law & Policy Review, 2024). At the same time, millions of jobs are at risk. A study by Goldman Sachs estimates that up to 300 million jobs worldwide could be automated or altered by generative AI (industry intelligence inc., 2025). Administrative, legal, and routine-heavy office work is especially vulnerable. New professions are emerging—like prompt engineering—but the social transformations will be far-reaching. However, this shift is occurring more slowly than anticipated a decade ago (OECD, 2023), making long-term predictions difficult, both technologically and socially.
And the core issue remains: Can AI truly innovate, or is it just remixing what already exists? Many experts argue that today’s models, such as GPT-4, are highly capable pattern-recognition machines (cf. Marcus & Davis, “Rebooting AI”, 2019). They generate outputs by recombining known information, but without any actual understanding. What’s missing is context, a sense of purpose, or even curiosity.
Genuine innovation, like the theory of relativity or quantum mechanics, requires paradigm shifts, interdisciplinary thinking, and the courage to challenge established ideas. These abilities are deeply rooted in human cognition and have so far defied algorithmic replication. At best, AI can serve as a catalyst or assistive tool, generating hypotheses or speeding up workflows, but not as a true source of disruptive breakthroughs. It remains dependent on human-defined goals, data, and evaluation metrics.
AI Only Works as a Tool
How can AI be expected to solve humanity’s grand challenges if it can’t truly innovate? In many ways, AI risks exacerbating the very inequalities it claims to fix. The climate crisis, for example, is not just an ecological emergency but also a social one. Studies show that wealthier populations contribute far more to global emissions, while poorer communities bear the brunt—facing heatwaves, food insecurity, or lack of adaptive infrastructure (The Guardian, 2025). If AI is primarily deployed for economic optimization and industrial efficiency, these imbalances risk being reinforced. New challenges may emerge—without solving the old ones.
If we accept that AI is not the solution, we must ask: How can it be part of one? The key lies in understanding AI as a tool. And like any tool, it depends on how and for what it is used. Responsible governance is essential—grounded in ethical reflection, fair policies, and the political will to shape AI for the common good.
In practical terms, developers, companies, and institutions have a responsibility to use AI consciously; for instance, to detect climate risks early, promote equitable resource use, or democratize access to knowledge. Even on the technical level, there are levers for change: Developers who engage with the principles of green coding can already contribute to sustainability in small but meaningful ways, as the sketch below illustrates. AI itself can also be used deliberately to make one’s code more sustainable; how exactly that works will be the focus of the next article in this series. For a general introduction to the topic, see our article “Green coding - sustainability in software development”, and for practical approaches, refer to “Green Coding Patterns - Practical Tips for Energy-Efficient Software”.
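As a small taste of such a green-coding lever, here is one minimal, illustrative pattern: avoiding repeated expensive computation through caching. The function name and its workload are invented placeholders; the caching mechanism itself is Python’s standard functools.lru_cache.

```python
# One small green-coding pattern: do not recompute what you already know.
# functools.lru_cache memoizes results, so repeated calls with the same
# argument cost almost no CPU time (and therefore energy). The function
# and its workload are illustrative placeholders.

from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_analysis(dataset_id: str) -> int:
    # Stand-in for a costly computation over some dataset.
    return sum(i * i for i in range(10_000_000)) % 1_000

expensive_analysis("sales_2024")  # first call pays the full cost
expensive_analysis("sales_2024")  # served from the cache: near-zero cost
```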
In the end, AI is not a magic fix but a building block of a more sustainable digital future. What makes the difference is not the technology itself, but our choices in how we use it.