Overview of digital service impact assessment tools
Written on 29/01/2024 by Youen Chéné
How to choose the right tool to evaluate the impact of a digital service?
This article stems from conversations with people from different communities (developers, IS architects, CSR professionals, members of communities such as Boavizta, Ethical Designers, Green IT). I realized that quite a few people are lost in this emerging ecosystem and use tools to measure something they were never designed for. To use a metaphor: it is like using a thermometer to measure humidity; it does not work.
One could argue that this at least makes it possible to gauge progress. The flaw is that it generates rejection among digital service developers, and therefore among some of the very people who build these services. This is evidenced by the absence of responsible-digital topics at conferences like Devoxx, while regional equivalents (Breizhcamp, BDX.IO, Sunny Tech, etc.) highlight them.
To try to remedy this, during the digital eco-design day on February 1 and Flowcon on March 6, 2024, together with another Boavizta member, Julien Rouzé of https://sopht.com/[Sopht], we will present the subject in a participatory workshop to help people better assess their current context instead of grabbing the first tool that comes along.
This article contains the introduction to the workshop, which should already help a lot of people find their way around.
Overview of types of measures and digital impact assessment models
The key is to clearly differentiate the types of digital services that can be evaluated depending on the tool:
- Web content (showcase, blog, e-commerce),
- Web application, mobile application, desktop application,
- All types.
Note that the machine learning/AI part does not yet appear. Work is underway in Boavizta on the subject.
The hardware impact databases (behind the Boavizta API, Resilio or NegaOctet) do not appear either, but they can be used in Ecolab or e-footprint for scope 3 assessment.
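To illustrate how such databases feed a scope 3 assessment, here is a minimal sketch of amortizing an embodied-carbon figure over a server's lifetime alongside usage emissions. All figures are assumptions for illustration, not values from Boavizta, Resilio or NegaOctet.

```python
def yearly_footprint_kgco2e(
    embodied_kgco2e: float,   # manufacturing impact from a database (scope 3)
    lifetime_years: float,    # amortization period
    avg_power_w: float,       # average electrical draw of the server
    carbon_intensity: float,  # grid intensity in kgCO2e per kWh
) -> float:
    """Embodied share for one year + one year of usage emissions."""
    embodied_per_year = embodied_kgco2e / lifetime_years
    usage_kwh_per_year = avg_power_w * 24 * 365 / 1000
    usage_per_year = usage_kwh_per_year * carbon_intensity
    return embodied_per_year + usage_per_year

# Example: 1000 kgCO2e to manufacture, 5-year life, 200 W average draw,
# grid at 0.06 kgCO2e/kWh (all assumed figures).
print(round(yearly_footprint_kgco2e(1000, 5, 200, 0.06), 1))  # prints 305.1
```

Note how the embodied share (200 kgCO2e/year here) can dominate usage emissions on a low-carbon grid, which is why scope 3 data matters.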
| Tool/Model | Type | Category 1 | Category 2 | Digital service type | Result type | Open source | Country of origin |
|---|---|---|---|---|---|---|---|
| Greenspector | Measure | Terminal | | Application | CO2 eq | No | France |
| Scaphandre | Measure | Servers | | All | CO2 eq | Yes | France |
| EcoCode | Measure | Code | | Application | Impact scale | Yes | France |
| EcoIndex | Modelization | Performance | | Web Content | Index | No (study), Yes (calculation) | France |
| Lighthouse | Modelization | Performance | | Web Content | Index | Yes | US |
| CO2.js (WebsiteCarbon, EcoGradr) | Modelization | CO2 | Bandwidth | Web Content | CO2 eq | Yes | UK/US |
| OneByte Model (LSP) | Modelization | CO2 | Bandwidth | Web Content | CO2 eq | Yes | France |
| WebsiteCarbon V2 | Modelization | CO2 | Bandwidth | Web Content | CO2 eq | Yes | UK/US |
| Fruggr, Digital Beacon, Greenoco, Greenmetrics | Modelization | CO2 | Bandwidth | Web Content | CO2 eq | No | France, UK |
| NumEcoEval | Modelization | CO2 | Server Plan | All | CO2 eq | Yes | France |
| GreenFrame | Modelization | CO2 | Server Plan | All | CO2 eq | Yes | France |
| eFootPrint | Modelization | CO2 | Software Architecture | Application | CO2 eq | Yes | France |
On the web content side, you will find more details in our annual state of the art.
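To make the "Bandwidth" family of models concrete (OneByte, CO2.js, WebsiteCarbon), here is a minimal sketch of their shared approach: multiply bytes transferred by an energy-per-gigabyte factor, then by a grid carbon intensity. The factors below are illustrative assumptions; each tool ships its own calibrated values.

```python
# Assumed factors, chosen only for illustration.
ENERGY_KWH_PER_GB = 0.81       # network + data center energy per gigabyte
CARBON_KGCO2E_PER_KWH = 0.442  # average grid carbon intensity

def co2_per_page_view(page_weight_mb: float) -> float:
    """Grams of CO2e for one page view under a bandwidth-only model."""
    gb = page_weight_mb / 1024
    kwh = gb * ENERGY_KWH_PER_GB
    return kwh * CARBON_KGCO2E_PER_KWH * 1000  # convert kg to grams

# A 2 MB page under these assumptions:
print(round(co2_per_page_view(2.0), 3))  # prints 0.699
```

The single parameter (bytes transferred) is exactly why these models score low on the "number of parameters" criterion discussed below: they are easy to apply but blind to terminal energy, server plans, and caching.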
What about the validity of evaluation tools and models?
Earlier in the article, we touched on the scientific bases of the measurement tools and models.
One way to assess them is to look at the number of parameters and criteria used upstream to build each model.
While this information is more or less meaningful depending on the complexity of the digital service, it gives an initial idea.
| Tool/Model | Type | Number of parameters | Scientific basis |
|---|---|---|---|
| Greenspector | Measure | N/A | Wattmeter measurement |
| Scaphandre | Measure | N/A | RAPL |
| EcoCode | Measure | N/A | Open data |
| EcoIndex | Modelization | 3 | Private study |
| Lighthouse | Modelization | > 10 | Open data |
| CO2.js (WebsiteCarbon, EcoGradr) | Modelization | 1 | Open data, weak |
| OneByte Model (LSP) | Modelization | 1 | Open data, weak |
| WebsiteCarbon V2 | Modelization | 1 | Open data, weak |
| Fruggr | Modelization | 1 to 3 | Private |
| NumEcoEval | Modelization | 5 to 10 | Open data |
| GreenFrame | Modelization | 5 to 10 | Open data |
| eFootPrint | Modelization | > 20 | Open data |
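To show what a low-parameter index model looks like in practice, here is an illustrative sketch in the spirit of EcoIndex's three parameters (DOM size, HTTP requests, transferred weight). The targets and weights are assumptions for illustration only; the real EcoIndex relies on quantile tables derived from its underlying study.

```python
def simple_index(dom_elements: int, http_requests: int, weight_kb: float) -> float:
    """Return a 0-100 score; 100 = at or below the assumed frugal targets."""
    targets = {"dom": 600, "req": 25, "kb": 500}  # assumed reference values
    weights = {"dom": 3, "req": 2, "kb": 1}       # DOM weighs most, as in EcoIndex
    values = {"dom": dom_elements, "req": http_requests, "kb": weight_kb}
    # Each criterion contributes its ratio to the target, capped at 3x.
    penalty = sum(
        weights[k] * min(values[k] / targets[k], 3.0) for k in targets
    ) / sum(weights.values())
    # penalty 1.0 = exactly on target; scale linearly down to 0 at 3x target.
    return max(0.0, min(100.0, 100.0 - (penalty - 1.0) * 50.0))

print(simple_index(1200, 50, 1000))  # twice every target: prints 50.0
```

With only three inputs, the model is cheap to compute on any page, which explains both its popularity and its limits: two pages with identical DOM, requests, and weight get the same score regardless of hosting or terminal mix.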
Conclusion
Don’t jump headlong into a measuring tool. Ask yourself the right questions first:

- What do you want to evaluate?
- What type of digital service?
- In what infrastructure context?
A small point of attention: in the case of hyper-shared, hyper-scalable architectures like YouTube, Twitch, Gmail, Netflix or ChatGPT, these models cannot all be used. Their architectures are so tailored to their specific needs, and their server plans so secret and constantly evolving, that in this context only a custom-made assessment works.
PS: the call to action at the bottom of this article only concerns content sites (showcase, blog, ecommerce).