Green software engineering – Back to the roots!

Thilo Hermann
27 Jun 2022

Sustainability is one of the hottest topics in IT currently. It’s obvious that software engineering also has an impact on our environment. The effect of mitigating this impact might not be as big as that of optimizing the steel industry, but there is still value in taking a closer look.

Green software engineering is an emerging discipline with principles and competencies to define, develop, and run sustainable software applications. The result of green software engineering will be green and more sustainable applications.

Green applications are typically cheaper to run, more performant, and more optimized – but that’s just a welcome addition, and I will explain the reason for this correlation later. The key thing is that developing applications in such a manner will have a positive impact on the planet. So, let’s have a closer look at the principles. Please note that green software engineering is just one part of sustainability in IT, but in this blog, I will focus on it!

According to the Principles of Green Software Engineering, the following principles are essential when building green applications:

  1. Carbon: Build applications that are carbon efficient.
  2. Electricity: Build applications that are energy efficient.
  3. Carbon Intensity: Consume electricity with the lowest carbon intensity.
  4. Embodied Carbon: Build applications that are hardware efficient.
  5. Energy Proportionality: Maximize the energy efficiency of hardware.
  6. Networking: Reduce the amount of data and distance it must travel across the network.
  7. Demand Shaping: Build carbon-aware applications.
  8. Measurement & Optimization: Focus on step-by-step optimizations that increase the overall carbon efficiency.

You should consider code and architectural changes that reduce the carbon emissions and energy consumption of your application. Please note that most of the examples are based on Java and cloud technologies like containers.

Just another NFR?!

When I read these principles, it came to my mind that they should be reflected in non-functional requirements (NFRs) for the application. If you treat them like this, it’s obvious that you must find the right balance, and typically there is a price tag attached.

The good news is that green principles are also related to well-known non-functional requirements like “performance efficiency” (see ISO 25010).

Those NFRs regarding performance and efficiency can typically be fulfilled by optimizing your code. We often have challenging performance requirements, and once you optimize your algorithms (e.g., moving from a bubble sort with a complexity of O(n^2) to a quicksort with a complexity of O(n*log(n))), you will reduce the CPU utilization, especially for huge data sets. Such optimizations also have a positive effect on energy consumption and thus on CO2 emissions.
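To make the difference concrete, here is a small sketch (not from the original post) that counts element comparisons in a bubble sort and contrasts the result with the JDK’s built-in sort. The class and method names are illustrative only:

```java
import java.util.Arrays;
import java.util.Random;

public class SortComparison {
    // O(n^2): counts element comparisons as a rough proxy for CPU work
    static long bubbleSort(int[] a) {
        long comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                comparisons++;
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 10_000;
        int[] data = new Random(42).ints(n).toArray();
        int[] copy = data.clone();

        long comparisons = bubbleSort(data); // always n*(n-1)/2 comparisons
        Arrays.sort(copy);                   // dual-pivot quicksort, ~n*log(n)

        if (!Arrays.equals(data, copy)) throw new AssertionError("sorts disagree");
        if (comparisons != 49_995_000L) throw new AssertionError("unexpected count");
        System.out.println("bubble sort comparisons: " + comparisons);
    }
}
```

For n = 10,000, the bubble sort performs roughly 50 million comparisons, while an O(n*log(n)) sort needs on the order of 130,000 – a factor of several hundred in CPU work, and thus in energy.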

On the other hand, a brute force solution by keeping the inefficient algorithms and just adding additional hardware (e.g., CPU, RAM) might work for the performance NFR, but not for efficiency, and thus this “lazy” approach will have a negative impact on your sustainability targets! Especially in the cloud with “unlimited” scaling this solution could be tempting for developers.

You might remember your computer science lectures around “complexity theory” and especially about “big O notation,” and you might have wondered what those are good for … now you know that those are key for a sustainable world! A green software engineer must be a master in implementing highly efficient algorithms!

You should be aware that the NFRs are sometimes conflicting, and you must make compromises. It’s a best practice to document your design decisions and their impact on NFRs. With this, it’s easy to visualize the impact and conflicts. Once you know those it’s the right time to make decisions!

One question still to be answered: when and how to optimize your green application?

Let’s start with two quotes from pioneers of computer science:

First rule of optimization: don’t do it.
Second rule of program optimization (for experts only!): Don’t do it yet – that is, not until you have a perfectly clear and unoptimized solution. — Michael Jackson

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. — Donald Knuth

To rephrase it: “know your enemy” and optimize according to the following algorithm:

  1. define your target
  2. measure as accurately as possible
  3. identify the bottlenecks
  4. optimize the biggest bottleneck (and only one at a time!)
  5. measure again
  6. check if you reached your target:
    a. Yes – you’re done.
    b. No – go back to step 2.
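The “measure as accurately as possible” step can be sketched as a tiny harness; this is a minimal illustration, not production code – for serious measurements use a dedicated tool like JMH. The workload here is a hypothetical stand-in for your real hot path:

```java
import java.util.Arrays;

public class MeasureSketch {
    // Hypothetical workload to be optimized; replace with your real hot path.
    static void workload() {
        int[] data = new java.util.Random(1).ints(100_000).toArray();
        Arrays.sort(data);
    }

    // Run the task several times and keep the best result to reduce noise
    // from JIT warm-up, GC pauses, and OS scheduling.
    static long measureNanos(Runnable task, int repetitions) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < repetitions; i++) {
            long start = System.nanoTime();
            task.run();
            best = Math.min(best, System.nanoTime() - start);
        }
        return best;
    }

    public static void main(String[] args) {
        long nanos = measureNanos(MeasureSketch::workload, 5);
        if (nanos <= 0) throw new AssertionError("measurement failed");
        System.out.println("best of 5: " + nanos + " ns");
    }
}
```

Only after such a baseline exists does it make sense to optimize the biggest bottleneck and measure again.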

With this approach, you should be able to avoid micro-optimization and premature optimization – the “root of all evil,” as Knuth put it.

To move on: it’s not only the next NFR, but rather a mindset change to strive for resource efficiency wherever appropriate.

Abstraction as environment killer?

Since I started my career in computer science, the level of abstraction has grown over time. In the beginning I was still learning machine/assembly language (for the famous 6502 microprocessor), moved on to C/C++, later to Java, and finally to low code (e.g., Mendix, OutSystems, Microsoft Power Apps). This made life as a programmer much easier and more efficient, but the drawback is that with all those abstractions the actual resource usage is hidden and has typically gone up.

The obvious conclusion would be that we should move back to machine language to build highly efficient green applications. “Unfortunately,” this conclusion is wrong for several reasons:

  • It’s really tough and time consuming to implement in machine/assembly languages
  • Complex, huge systems are tricky to implement, even with a higher level of abstraction
  • Cost most probably will explode
  • Time to market will be much longer

On the other hand, compilers have improved a lot in the last years, and they often optimize the code better than an average programmer is able to. Techniques like loop optimization, inline expansion, and dead store elimination, to name a few, will improve the efficiency and thus lead to a greener application.

In some cases, it might be worth choosing a lower level of abstraction (e.g., use C) to optimize to the max, but this must be an explicit decision for the really heavily used code parts. As already shown above, you need to “know your enemy” and only optimize according to this pattern where you expect a huge impact!

Impact of chosen programming language

As mentioned, the chosen programming language can have a major impact on energy consumption (see the study “Energy Efficiency across Programming Languages”). As expected, once more C seems to be the benchmark in this area. The match “compiler vs. interpreter” is definitely won by the compilers. The optimizations done in the Java JVM seem to also have a positive impact on energy consumption and performance.

Bad news for all scripting languages, as they are at the lower end of the “ranking” for power usage. If you’re using JavaScript or even TypeScript heavily for computation tasks, you should look for better options. As always, this might change over time as interpreters can get optimized, as we saw for Java in the past. Techniques such as just-in-time compilers (JIT) and Java Hotspot JIT compilers had a major impact for Java, and this most probably could also be done for JavaScript and TypeScript.

As Python is the language of choice for a lot of topics (incl. AI), I would encourage the experts to optimize it for energy consumption. For the time being, Python is second to last, and only Perl is worse. The energy footprint is approximately 75 times higher than for C, and thus there should be a lot of potential for optimization.

Parallelization – Use your cores!

Modern hardware architectures with multiple cores on one chip are utilized best by parallelized algorithms. So once more it helps to know the theory and best practices in this area. As parallel computing is an old topic, you should watch out for parallelized algorithms for a given problem.
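In Java, the standard library already hides much of the tricky coordination work; a minimal sketch using parallel streams (the numbers and workload are illustrative):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long n = 10_000_000L;
        // Sequential reduction over 1..n
        long seq = LongStream.rangeClosed(1, n).sum();
        // Parallel reduction: the runtime splits the range across the
        // available cores via the common ForkJoinPool.
        long par = LongStream.rangeClosed(1, n).parallel().sum();
        if (seq != par) throw new AssertionError("results differ");
        if (seq != 50_000_005_000_000L) throw new AssertionError("wrong sum");
        System.out.println(seq);
    }
}
```

Note that parallelism only pays off when the per-element work outweighs the coordination overhead; for trivial operations on small data sets, the sequential version can be both faster and more energy efficient.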

To be honest, we nowadays have so many abstraction layers between the actual CPU and the code we run on it that the impact is really hard to measure and calculate. Thus, I wouldn’t invest too much time in parallelizing my algorithms, as this is a really tricky and error-prone task!

Complex frameworks/COTS vs. lightweight alternatives

Besides abstraction, the usage of frameworks has an impact on the green features of your application. Middleware COTS products like application servers (e.g., IBM WebSphere, Oracle WebLogic, JBoss, …) introduce complexity which might not be needed in every case. Lightweight alternatives like Tomcat, or – going even further – Spring Boot, Micronaut, or Quarkus will reduce the memory footprint, startup time, and CPU usage by a lot. So, once more, you should check if you really need the complex and feature-rich frameworks or if the lightweight alternatives are good enough. For most of the cases I was involved with, the lightweight alternatives were perfectly fine!

I strongly recommend selecting the “lightest” framework that still fits the requirements. Don’t stick to given or even mandatory standards and be willing to fight for a greener alternative!

Architecture – Microservices vs. monoliths

Architecture can have a huge impact. Over the years architectural patterns have evolved, and currently microservices are an often-used pattern. When you compare legacy monoliths with a modern microservice architecture, there are positive and negative effects on the resource usage:

Positive effects:
  • Scaling: With a microservice architecture it’s easily possible to scale only where needed. Mechanisms like auto-scaling on container platforms (e.g., Kubernetes, OpenShift, AWS EKS, MS Azure AKS) do this only on demand and thus reduce the overall resource usage.
  • Best technology: You can choose the best fitting technology (e.g., databases, programming languages) for your purpose, thus reducing the complexity and level of abstraction for every microservice independently. This will lead to a more efficient application if done in a proper manner.

Negative effects:
  • Network traffic: Within a microservice architecture the number of “external” calls is much higher than in a monolithic application. Even with lightweight protocols this introduces an overhead, and thus the resource usage will go up. If there is an API management tool involved, the overhead is even bigger.
  • Data replication: It’s quite common to replicate data in a microservice architecture to enable independence between the services. This leads to a higher storage demand, and propagating changes via events (e.g., event sourcing, command query responsibility segregation (CQRS)) increases the network traffic and CPU utilization.

You need a case-by-case evaluation to determine if the chosen architecture has a positive effect on the green principles or not!
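The on-demand scaling mentioned above can be configured declaratively on container platforms. As an illustration only (all names are placeholders, not from the original post), a minimal Kubernetes HorizontalPodAutoscaler might look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service          # hypothetical microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 1               # scale down to a single pod when idle
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods only above 70% average CPU
```

A low `minReplicas` value is the green lever here: idle capacity is released instead of burning electricity around the clock.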

… and now to something completely different: no blog is complete without the KISS principle (“keep it simple, stupid”).

KISS – Reduce to the max

For all green software developers, the KISS principle should be applied along the following dimensions: CPU – RAM – DISK – NETWORK

Now let’s have a look at what are typical measures to achieve those reductions:

  • Efficient algorithms: That’s obvious and already explained in the beginning of this blog. The better the algorithm works (e.g., CPU utilization, memory consumption), the faster you will reach your target. The reuse of existing libraries that include optimized solutions for common problems is a best practice.
  • Caching: Caches can reduce the amount of external service calls to a minimum. This includes database, file systems, external services, and web-content calls. Be aware that when introducing caches, you must make sure that the functional requirements are still fulfilled. You might get eventual consistency as drawback.
  • Compression: Data compression can be applied on several dimensions. Communication protocols should be optimized with respect to size. For example, moving from SOAP to REST will already reduce the size and the marshalling and un-marshalling effort. You can go even further and use binary formats (e.g., gRPC, ActiveJ). The drawback of binary protocols is that they are not easily human readable, and some lack interoperability with other programming languages. Besides protocols, you should also check if you can reduce the resolution of graphics used within your (web) application, or better still move to textual representations. Tree shaking is another approach, often used in JavaScript, to reduce the amount of code transferred to and compiled in the web browser. In general, if you compress and decompress during runtime, you need to account for this overhead and check if it’s a real efficiency improvement.
  • Scream tests: The concept of a scream test is quite simple – remove the application/service and wait for the screams. If someone screams, put it back online. This helps to get rid of unused applications/services. If you are lucky, no one screams and thus this will reduce resource consumption.
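The compression trade-off from the list above is easy to try out. A small sketch using the JDK’s built-in gzip support (the payload is an artificial, repetitive example, where compression pays off the most):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data); // closing the stream flushes the gzip trailer
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        String payload = "status=OK;".repeat(1000); // highly redundant input
        byte[] original = payload.getBytes(StandardCharsets.UTF_8);
        byte[] packed = compress(original);

        if (packed.length >= original.length) throw new AssertionError("no gain");
        if (!payload.equals(new String(decompress(packed), StandardCharsets.UTF_8)))
            throw new AssertionError("round trip failed");
        System.out.println("original=" + original.length + " compressed=" + packed.length);
    }
}
```

Whether the saved network bytes outweigh the CPU spent compressing depends on the payload and the link – which is exactly the runtime trade-off the bullet point describes.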

Recurring fully automated tests

With the rise of automated testing in CI/CD pipelines as standard for engagements, the electricity consumption went up. If you take DevOps seriously, you need to have highly automated tests with a high test coverage that are executed frequently. This should include functional and non-functional tests.

Should we get rid of those tests for the sake of sustainability?

For sure not, but we might need to optimize our testing strategy. Instead of running all tests, we should rather focus on running the relevant ones. Thus, you need to know the dependencies and the impact of the changes you made. Existing test-selection tools support you in this, so you should avoid the “brute-force” approach of testing everything after every build.

… and now for something completely different – AI?!

Machine learning and AI are interesting topics. The training of a neural network model can emit as much carbon as five cars in their lifetimes. And the amount of computational power required to run large AI training models has been increasing exponentially in the last years, with a 3.4-month doubling time (see OpenAI’s “AI and Compute” analysis). On the other hand, you might save a lot of carbon and/or electricity by using those models. It’s in some ways an investment, and one needs to calculate a “business case” to make the right decision. This seems to fall into the well-known “it depends” category of the consulting business.

It’s obvious that we need optimized algorithms in the future in this area (i.e., training of neural networks and also using them), and the research on this is at its starting point.

Finally, money!

In a service business, money is always important. The old rule that “each non-functional requirement will cost you something” is also true for those imposed by sustainability. You can spend money only once, and thus you need to decide if it goes into an additional feature or into optimizing the footprint. This discussion must be carried out with your client, and you might learn how important this topic is! Please note that an efficient, green application will save money during the lifecycle by reducing the run-time costs. In the long run, the payback of the investment might lead to a positive business case.

Know the basics, learn from the past…

Green software engineering also reminds me of the typical waves we have in IT and adds another wave: efficient application vs. efficient development.

Finally, I would state the following: every experienced software engineer with a strong focus on performance and efficiency is well equipped to become a green software engineer!

And sometimes the best solution for the environment would be to get rid of the application completely to save the planet! For whatever reason, Bitcoin comes to my mind while I’m writing this ;-)