Product

With the design principles established and roles reorganised, the work shifted to translating what we had learned into the product itself. Every implementation decision was run through the same filter: does this serve transparency, accessibility, explaining the output, or keeping users in control?


What we implemented from round one and why

Our visual design assumption going in, informed by benchmarking competitors, was clean, simple, and colour-guided. It held up: 85% of users liked the design, several called it professional, and users responded to the colour-guided experience without being prompted, which gave us the confidence to carry it into the prototype rather than rethink it.

The feedback from round one split clearly: visual design held up, messaging didn't. Over half of users didn't know what a DPE, the French energy performance rating, was before starting. Without understanding the core concept, they couldn't trust anything the tool told them. We added a plain-language explanation with the full A to G scale to the landing page, a direct transparency fix. The "not sure? check your…" tooltips came from the same place: if users don't know where to find the information we're asking for, we are being neither transparent nor accessible.

Accessibility drove the smaller but equally important changes. Two users had to scroll back to the top manually between every step. 58% didn't know what was happening during calculation and assumed the tool had broken. Both were fixed, with an automatic scroll to top, a loading state during calculation, and a restructured output that brings the most useful information forward.
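To illustrate those first two fixes, here is a minimal sketch assuming the exported prototype is a React app; the component and prop names are illustrative, not taken from the actual codebase:

```tsx
import { useEffect, useState } from "react";

// Hypothetical step-wizard component; names are placeholders, not the real code.
function StepWizard({ step, runCalculation }: { step: number; runCalculation: () => Promise<void> }) {
  const [calculating, setCalculating] = useState(false);

  // Fix 1: scroll back to the top whenever the step changes,
  // so users no longer have to do it manually between questions.
  useEffect(() => {
    window.scrollTo({ top: 0, behavior: "smooth" });
  }, [step]);

  // Fix 2: show an explicit loading state while the calculation runs,
  // so users know the tool is working rather than broken.
  async function handleCalculate() {
    setCalculating(true);
    try {
      await runCalculation();
    } finally {
      setCalculating(false);
    }
  }

  return calculating ? (
    <p>Calculating your results…</p>
  ) : (
    <button onClick={handleCalculate}>See my results</button>
  );
}
```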

Explaining the output was the biggest structural change. Multiple users interpreted "potential cost" and "annual cost" differently, and none of them accurately. The results page was reorganised into five named sections with ROI framing throughout: what the work costs, how long it takes to pay back, and what you save per year.
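The arithmetic behind that framing is simple. A hedged sketch, where the field names and example figures are assumptions rather than the production model:

```ts
// Illustrative ROI framing; field names and figures are assumptions.
interface Recommendation {
  label: string;
  upfrontCost: number;   // what the work costs, in euros
  annualSavings: number; // what you save per year, in euros
}

function roiSummary({ label, upfrontCost, annualSavings }: Recommendation): string {
  const paybackYears = upfrontCost / annualSavings; // how long it takes to pay back
  return `${label}: costs €${upfrontCost}, saves €${annualSavings}/year, pays back in ~${paybackYears.toFixed(1)} years`;
}

// roiSummary({ label: "Double glazing", upfrontCost: 4000, annualSavings: 320 })
// → "Double glazing: costs €4000, saves €320/year, pays back in ~12.5 years"
```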

Keeping users in control came down to two specific findings. One user pointed out that only being able to select one heating type was inaccurate for most homes. Another said there was no way to skip questions they genuinely couldn't answer, and suggested showing a live estimate for users who don't know their bill. Both changes were implemented.
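To show how such a live estimate could work, here is a hypothetical sketch; the per-square-metre rates and heating categories are placeholders, not the product's actual model:

```ts
// Hypothetical live bill estimate for users who skip the bill question.
// Rates are placeholder figures (€/m²/year), not the product's real model.
const RATE_PER_M2: Record<string, number> = {
  electric: 22,
  gas: 15,
  heatPump: 9,
};

function estimateAnnualBill(surfaceM2: number, heatingTypes: string[]): number {
  if (heatingTypes.length === 0) return Math.round(surfaceM2 * 15); // fall back to a mid rate
  // Average across all selected heating types, now that multi-select is allowed.
  const rates = heatingTypes.map((t) => RATE_PER_M2[t] ?? 15);
  const avgRate = rates.reduce((sum, r) => sum + r, 0) / rates.length;
  return Math.round(surfaceM2 * avgRate);
}

// estimateAnnualBill(80, ["electric", "gas"]) → 80 × 18.5 = 1480 (€/year)
```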


What we implemented from round two and why

Round two was conducted this week with fourteen users, again one-on-one and observed, on the fully independent version of the prototype. The prototype was originally built in Lovable, as mentioned earlier, but all of the code was then exported and made independent. It now lives on GitHub, where it connects directly to the backend created by Nicodème. This switch gave us greater creative freedom, and with it the flexibility to implement user feedback. The findings from the second round have not been implemented yet, simply because the testing is so recent.
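For a sense of what that backend connection involves, here is a minimal sketch using the supabase-js client; the environment variable names and the "simulations" table are placeholders, not the project's real configuration:

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholder env var names; the real project's keys will differ.
const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
);

// Hypothetical example: persist a completed simulation so results
// survive a reload instead of living only in component state.
async function saveSimulation(result: Record<string, unknown>): Promise<void> {
  const { error } = await supabase.from("simulations").insert(result);
  if (error) console.error("Failed to save simulation:", error.message);
}
```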

The second round had to be postponed because of the technical difficulty of linking the platforms together and the instability of the service hosting our site. That instability surfaced during the testing itself, where certain features worked for some users and not at all for others. The live bill estimate was the clearest case, and it was subsequently changed, also in response to internal feedback from the team.

Terminology was the most consistent issue, flagged by 64% of users: words like "draughts", "glazing", and "pellets" created confusion that directly contradicts our accessibility principle, and they will be replaced with plainer language. One user flagged that "could save you thousands per year" is an overstatement when their bill isn't in the thousands, which is a transparency failure.
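One possible fix for that transparency failure, sketched with illustrative names: cap any savings claim at what the user actually spends.

```ts
// Sketch of a transparency guard: never claim savings larger than the user's bill.
function savingsClaim(estimatedSavings: number, annualBill: number): string {
  const capped = Math.min(estimatedSavings, annualBill);
  return `could save you up to €${capped} per year`;
}

// savingsClaim(2400, 900) → "could save you up to €900 per year"
```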

On explaining the output, one user thought the main number was what they had to spend rather than what they would save, and another noticed the savings figure changing between sections without explanation. On keeping users in control, one user found the "not sure" button visually present on the heating step but non-functional, and there was no option for users without a washing machine. Fixes for all of these will be ready for round three of user testing.

92% liked the design
100% found it easy to use
64% flagged terminology issues
+7% satisfaction vs round one

Several users specifically praised the tooltips, the landing-page user statements, and the depth of the recommendations section, in line with the seven-point rise in satisfaction since round one.


Overall journey

Looking back, the two rounds of user feedback gave us far more than a list of bugs to fix: they revealed deeper insights about our design principles and messaging, which we could then carry into the new prototype. The Base44 MVP proved the concept worked but assumed too much of the user: that they knew what DPE was, that they could find the information we were asking for, and that they would understand what the output meant. The prototype stopped making those assumptions.

Each design principle became a concrete checklist against which feedback was measured, which meant decisions had a rationale beyond gut feeling. That shift, from building what seemed right to building what users showed us was needed, is what the iteration process actually produced.


Try the prototypes

The first version of the product was built on Base44 as an MVP to validate the concept and run the first round of user testing. You can still access it below.

Following the first round of user feedback, the entire product was rebuilt as an independent prototype with greater flexibility and a direct connection to the Supabase backend. This is the current version of ÉcoWatt.


What is next?

The next round needs to go further than testing usability. Messaging has been the central challenge from the start, the biggest failure in round one, and the explicit goal we set out to improve in round two. It got better: 92% of users understood what the product was, which wasn't the case before. But it's not solved.

The independent version introduced new language problems that weren't in the Base44 MVP, and beyond translation, the content itself still misleads in places. The next round needs to measure not just whether users can navigate the tool, but whether they genuinely understand what it is telling them at every step, and trust it enough to act on it.