About this sin
Product teams are often guilty of this. Roadmaps are committed to and features are promised. No one wants to be the one who disappoints, so teams focus on delivery rather than performance. Experimentation is set aside as a distraction or as unnecessary. You’ll often hear the phrase, “We know it will work. There’s no need to test this. We have a lot of customer feedback.”
How to atone
Admittedly a tough one, but roadmaps need enough slack to allow for experimentation and iteration. Be a polite yet persistent nag in this situation, escalating if needed, since organizations need to focus on performance rather than delivery. A conversation in which a CRO convinces decision makers to always ask for experimental data during presentations often helps. Ease stakeholders into running tests during feature releases – where a percentage of users don’t see the new feature. In a worst-case scenario, perform a pre/post test (perhaps with a matched t-test; see the sketch below) – any data is better than no data. Reporting the impact of a release is often great data for product leaders to share around. If a KPI turns out to be negatively impacted, suggest rolling the feature back and frame it as “loss averted”.
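If you do fall back on a pre/post comparison, a matched t-test pairs each user’s metric before and after the release, so each user acts as their own control. Here is a minimal sketch, assuming you can pull the same per-user metric for both windows; all values and names are illustrative:

```python
# Minimal sketch of a matched (paired) pre/post test.
# Assumes the same metric is measured for the same users before and
# after the release; all values and names here are illustrative.
import numpy as np
from scipy import stats

# Hypothetical per-user metric (e.g., weekly orders) before and after.
pre = np.array([2.1, 0.0, 1.4, 3.2, 0.8, 2.7, 1.1, 0.5])
post = np.array([2.4, 0.2, 1.1, 3.9, 0.7, 3.1, 1.6, 0.4])

# Paired t-test: each user serves as their own control.
t_stat, p_value = stats.ttest_rel(post, pre)
lift = (post.mean() - pre.mean()) / pre.mean()

print(f"mean lift: {lift:+.1%}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

The usual caveat applies: a pre/post design can’t separate the release from seasonality or other concurrent changes, which is why it’s the worst-case fallback rather than the default.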
Useful script
I understand that resources and bandwidth are tight right now. But I believe we both want to drive outcomes, not just output. How about we roll out your feature to just a percentage of your users? This way we can measure its impact and mitigate risk. Then you'll have data to share about the impact your work is having.
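For the rollout itself, the assignment just needs to be deterministic so a user sees the same experience on every visit. Here is a minimal sketch using hash bucketing; the feature name and rollout percentage are assumptions for illustration:

```python
# Minimal sketch of a percentage rollout with a holdback group.
# Deterministic hashing keeps each user's assignment stable across
# sessions; the feature name and rollout percentage are illustrative.
import hashlib

def sees_new_feature(user_id: str, feature: str = "new_checkout",
                     rollout_pct: int = 90) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout_pct     # users in buckets 90..99 are held back

print(sees_new_feature("user-42"))  # same answer on every call
```

Even a small holdback (say, 5–10% of users) gives you a control group to measure against – exactly the data the script above promises stakeholders.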
Remember that user research, as useful as it is, doesn't tell you how your new feature will impact real users. All data has biases, but experiments – namely randomized controlled trials – carry less bias and support stronger conclusions than user research does.
Useful resource