Environmental Impact of AI
Every AI query has a measurable environmental cost in electricity, carbon emissions, and water consumption. For professionals, understanding these costs is the first step towards Green AI practices.
1. The Hidden Costs: Water and Carbon
Water Consumption (Cooling)
Data centres use evaporative cooling to manage the intense heat generated by AI hardware.
- Typical Rate: 1–3 litres per kWh of computation.
- Per Query: A complex AI query (~0.005 kWh) evaporates roughly 5–15 mL in on-site cooling (0.005 kWh × 1–3 L/kWh); estimates rise to roughly 10–50 mL once the water used to generate the electricity is included.
- The Issue: This water is evaporated and lost to the local ecosystem, often in water-stressed regions.
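The arithmetic above can be sketched in a few lines. The per-query energy and litres-per-kWh figures are the estimates quoted in this section, not measured values:

```python
# Estimate per-query cooling water from energy use and water intensity.
# Figures are illustrative estimates from the text, not measurements.

def water_per_query_ml(energy_kwh: float, litres_per_kwh: float) -> float:
    """On-site cooling water (mL) evaporated for one query."""
    return energy_kwh * litres_per_kwh * 1000  # litres -> millilitres

# A complex query (~0.005 kWh) at typical cooling rates (1-3 L/kWh):
low = water_per_query_ml(0.005, 1.0)
high = water_per_query_ml(0.005, 3.0)
print(f"{low:.0f}-{high:.0f} mL of on-site cooling water per query")
```

The same function also scales up cleanly: 1,000 such queries at the upper rate evaporate about 15 litres.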
Carbon Footprint (Energy Mix)
The carbon cost depends entirely on where the servers are located and the local energy grid.
- Low Impact: Iceland/Norway (~100% renewable) → 0.05g – 0.1g CO₂ per query.
- High Impact: US/Australia/Poland (~60-75% fossil fuels) → 2g – 3.5g CO₂ per query.
- Implication: The same query can have a 70× difference in carbon impact based on provider location.
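The carbon figures follow from one multiplication: energy per query times the grid's carbon intensity. The intensities below are rough public grid averages chosen for illustration, not provider-specific data:

```python
# CO2 per query = energy (kWh) x grid carbon intensity (g CO2 per kWh).
# Intensities are rough, assumed grid averages for illustration only.

GRID_G_PER_KWH = {
    "Iceland (near-100% renewable)": 20,
    "Norway (hydro-dominated)": 25,
    "US average": 400,
    "Poland (coal-heavy)": 650,
}

ENERGY_PER_QUERY_KWH = 0.005  # complex-query estimate from the text

for grid, intensity in GRID_G_PER_KWH.items():
    grams = ENERGY_PER_QUERY_KWH * intensity
    print(f"{grid}: {grams:.2f} g CO2 per query")
```

With these assumed intensities the spread runs from 0.10 g to 3.25 g per query, in line with the roughly 70× gap quoted above.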
2. Putting it in Context
| Activity | Environmental Impact | Equivalent AI Usage |
|---|---|---|
| 5-minute shower | 40 litres water | 4,000 queries |
| 1 km driving (petrol car) | 150g CO₂ | 60 queries |
| Manufacturing a laptop | 200kg CO₂ | 80,000 queries |
| 1 smartphone charge | ~4.32g CO₂e | 1,000 words generated (GPT-4) |
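A small converter makes the table reusable for any query count. The per-query costs (10 mL of water, 2.5 g CO₂) are mid-range estimates from this section, and the shower and driving figures come from the table above:

```python
# Convert a query count into the everyday equivalents from the table.
# Per-query costs are mid-range estimates from the text, not measurements.

WATER_ML_PER_QUERY = 10   # mL
CO2_G_PER_QUERY = 2.5     # g

def equivalents(n_queries: int) -> dict:
    """Map a number of AI queries to everyday environmental equivalents."""
    water_l = n_queries * WATER_ML_PER_QUERY / 1000
    co2_g = n_queries * CO2_G_PER_QUERY
    return {
        "water_litres": water_l,
        "showers_5min": water_l / 40,   # 40 L per 5-minute shower
        "co2_grams": co2_g,
        "km_driven": co2_g / 150,       # 150 g CO2 per km (petrol car)
    }

print(equivalents(4000))
```

For 4,000 queries this reproduces the first table row: 40 litres of water, one 5-minute shower.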
3. Scaling Effects: Why Efficiency Matters
AI's environmental impact is growing rapidly because three factors compound with each other:
- Model Size: Larger models (e.g., GPT-4 vs GPT-3) require significantly more energy per token.
- User Growth: Hundreds of millions of people now use AI tools, with millions more adopting them each month.
- Frequency: Users are integrating AI deeper into daily workflows, increasing queries per person.
The Result: Some projections suggest up to a 50× increase in AI-related energy consumption between 2023 and 2025.
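Because the three factors multiply rather than add, modest growth in each compounds into a large total. The multipliers below are hypothetical, chosen only to show how compounding can reach a figure like 50×:

```python
# Total energy scales multiplicatively across the three factors above.
# These growth multipliers are hypothetical, for illustration only.

baseline_energy = 1.0      # normalised 2023 energy use
model_factor = 2.5         # energy per query (larger models)
user_factor = 5.0          # growth in active users
frequency_factor = 4.0     # growth in queries per user

projected = baseline_energy * model_factor * user_factor * frequency_factor
print(f"{projected:.0f}x baseline energy use")
```

Note that halving any one factor (e.g. via right-sizing models) halves the total, which is why the efficiency principles below matter.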
4. Professional Green AI Principles
To minimise AmaDema's footprint, follow these high-level principles:
- The One-Shot Goal: Use frameworks (AUTOMAT/CO-STAR) to get the right answer on the first attempt. Reducing iterations from 5 down to 1 saves 80% of the energy cost.
- Right-Sizing: Don't use a massive model (GPT-4) for a task a small model (Llama 8B) can handle. Small local models can be 100–1000× more efficient for routine tasks.
- Batching and Caching: Process multiple related items in one query and save results to avoid redundant generations.
- Value Check: Before prompting, ask: "Is the value of this insight worth the 50mL of water and 3g of CO₂ it costs?"
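The caching principle above is straightforward to apply in code. A minimal sketch, assuming a hypothetical `call_model` stand-in for whatever model API is in use:

```python
# Avoid redundant generations by caching responses keyed on the prompt.
# `call_model` is a hypothetical stand-in for a real model API call.

from functools import lru_cache

@lru_cache(maxsize=1024)
def call_model(prompt: str) -> str:
    # In reality this would hit the model API; each real invocation
    # costs energy, carbon, and cooling water.
    return f"response to: {prompt}"

call_model("Summarise Q3 sales")  # real call (cache miss)
call_model("Summarise Q3 sales")  # served from cache: no extra cost
print(call_model.cache_info().hits)
```

This only helps for exact-repeat prompts; for batching, the analogous move is to combine several related items into one prompt so the fixed per-query overhead is paid once.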
Next: Optimisation Strategies →