Irresponsible innovation anyone?
Why we need to rethink the story behind responsible innovation
High on a mountain above Oslo you can see the strange angular shape of Holmenkollbakken — the famous ski jump which has dominated the horizon since 1892. In its current form, designed and built to host the 1952 Winter Olympics (and continually upgraded since), it offers a terrifying glimpse of over 100 metres of cold, thin air before you land.
Which is a great metaphor for innovation: exciting, full of promise, but also seriously uncertain and unpredictable.
It’s also a great location for a conference. As I discovered when I visited to speak at the AFINO conference, which brings together researchers and practitioners concerned with the challenges around what has come to be called ‘responsible innovation’ (RI). The aim was to look at the concept, how it’s changed over time and what challenges lie ahead. Which got me thinking — and wondering out loud in my keynote — about where we’ve got to.
Looking back is a bit misty, like the cloud which periodically wrapped itself around us at the conference. If we take a loose definition of RI it’s about thinking through and trying to anticipate the unexpected in innovation — the potential down sides or hidden angles to the picture we see when innovation emerges. To pause for thought before we plunge headlong into the exciting new world opened up. To go back to our ski jump metaphor, to look before we leap.
Today’s very visible example is, of course, AI — and you don’t have to look far to realise that this incredibly powerful technology comes with a lot of baggage. Not a day goes by without, on the one hand, new announcements about the huge potential of the technology and, on the other, the very real challenges it is throwing up around ethics, skills and employment, privacy, intellectual property and a hundred other concerns.
Looking back…
We’ve always been worried about the unwanted implications of innovation, and that’s a healthy concern. Think about the Luddites way back in the factories of the Industrial Revolution. They weren’t a bunch of mindless vandals smashing up machines for fun; they were trying to focus attention on serious concerns about skills, working conditions and other employment-related issues around the technical ‘progress’ of the time.
Fast forward to the 20th century and ‘change management’ is a big concern; introducing innovation into factories and offices isn’t without its complications, and workers seem capable of being really creative in the ways in which they can resist it. So there’s a need to think through the implications of technological change and try to ameliorate negative effects — to head trouble off at the pass. But it’s not a one-sided thing; evidence also accumulated around the idea that if you involve the users of new technology (employees) early enough, they can offer insights which not only smooth acceptance but make for a better design of that innovation. From coal mining to clothing to computer systems, institutions like the Tavistock Institute and their socio-technical systems approach paved the way for more participative ways of introducing technological change.
These concerns about thinking through the implications of innovation operate at macro-scale as well. During the last century there was a growing focus on thinking through the challenges posed by some pretty ‘big ticket’ items affecting the future of the planet — pollution, resource depletion, sustainability and so on. Again this conversation was not all about ostrich emulation, burying our heads in the sand, hoping the problem will go away, or trying to turn back the clock. It was around enabling concerned conversations to try and influence progress in responsible directions. What organizations like the Club of Rome and many other think tanks were really getting at was the need for a wider debate and for more anticipation and thinking ahead about how to mitigate the negative aspects of technological change. Responsible innovation.
Nor was it just rhetoric; this was the era in which many research groups were founded taking an interdisciplinary look at these challenges and developing a toolkit to help us work with thinking about the future.
Where are we now?
Which brings us to the present day — and there is considerable cause to celebrate. Responsible innovation (masquerading under many different labels over time) is now highly visible in conversations around innovation. Whether we’re looking at AI or genetic engineering or geo-engineering we can see the issues being widely debated and discussed. Organizations, public and private, are increasingly measured on their commitment to thinking and acting responsibly in terms of key themes like sustainability and social responsibility. This has prompted a growth industry around helping develop strategy statements and corporate positions in the responsible innovation space — and in making sure this image is widely promoted.
RI has moved centre stage — and it’s not just words. There are regulations enacted to control the rate and direction of innovative activity and adoption, there are multiple research journals exploring issues raised, there are toolkits and consultants able to deploy them, there’s even an ISO standard around it. RI is very much part of the fabric of innovation thinking in 2024.
Which is where the problem comes in. By moving centre stage RI becomes something impossible to disagree with. It’s a slogan which hangs there above us, an important and good thing, along with ‘motherhood’ and ‘apple pie’. And the risk is that we swap slogan for substance; RI becomes a bit of a religion to which we all pay lip service. Try and imagine any organization today (public or private sector) talking about its ‘irresponsible innovation’ or its disregard for sustainability and the future of the planet. It’s impossible. The words are everywhere — but the meaning and action may often be missing.
So if we’re serious about RI we need to look more closely at what it is and how we can act on it. One more time — what does RI mean on the ground?
Richard Owen and Jack Stilgoe have done a helpful job in synthesising the long-running discussion around RI into a simple framework with four key dimensions. Four ‘dials’ which we should check on our dashboard:
Anticipation — do we look ahead, question and explore, keep in mind the design space which we have before things get closed down into a dominant design or a technological trajectory?
Responsiveness — how far can we create design space, make choice possible rather than assume a technology-driven logic? How far can we open up design space and keep it open? Digital technology is a great example; software can be rapidly reshaped and adapted, with no fixed point at which the freeze comes in.
Reflexiveness — how far are we prepared to challenge ourselves and reflect, to explore and adopt different approaches?
Inclusiveness — how far are we prepared to and able to bring in the insights and concerns of different stakeholders?
There’s plenty of research evidence to support the view that doing these things is socially responsible but it’s also becoming clear that it makes good business sense. Let’s take the example of ‘inclusiveness’ and explore a little further how this plays out in practice.
Why on earth should we bother with stakeholders — the user dimension? Because users are key players in innovation. They are sources — sometimes as heroes, pioneering new-to-the-world ideas. We wouldn’t have disposable nappies, skateboards, pickup trucks or ice cream cones (plus a host of other innovations) if it weren’t for the pioneering efforts of users to bring these to life. (Indeed the smartphone on which we might search for other examples is itself a user innovation, or at least the Linux software powering it is, resulting from the efforts of a community of frustrated programmers.) In fact research suggests a significant proportion — 20% of products, more of processes — start life as user ideas and are then developed and commercialised.
Users are creators — and they’re also adapters, improvers, hackers. They don’t simply accept innovations, they make them work in their context. Whether it’s farmers modifying their Model T Fords to make them pay their way on the farm or mountain bike fanatics pioneering new designs to give even higher performance, users are beavering away across the product and service spectrum to enhance things. And it’s the same in our workplaces; we constantly tinker with and adapt the processes with which we work, coming up with little hacks and workarounds to make life easier. Companies which tap into these suggestions via online collaboration platforms regularly report millions of dollars in savings as a result of capturing and implementing such ideas.
It’s human nature. Think of your own experience and the many daily hacks you implement, the workarounds you develop to solve your problems.
Users have a high incentive to innovate and they’re prepared to tolerate failure and ambiguity, prototypes on the road to improving things. Frustration and excitement are very human and very powerful drivers of innovation. Of course, not every user wants to be a hero innovator — the kind of person who comes up with an idea and then fights their way through to making it happen. People like James Dyson whose frustration with his vacuum cleaner led him (via 5 years and 5000 plus discarded prototypes) to develop the bagless cleaner which bears his name. Or Tal Golesworthy whose life-threatening heart condition gave him the incentive to design and have implanted a new kind of heart valve. But many other users are prepared to interact, co-create incremental innovations as they use them — that’s why software developers place so much stress on beta testing and other ways of mobilising a community of user insights.
Others might have insights and ideas but these are locked up. They know stuff which might be highly relevant; the challenge is to find ways to listen to those voices and work with those ideas.
But beyond this idea of users as a source to help improve the design there is another key reason why they are important. As adopters or as proxies for those who will adopt. The barrier to any innovation success is moving it to scale — from the prototype and early launch to widespread diffusion. The familiar S curve is crucially where we need to focus our attention, not just at the front end. And here decades of innovation research converge on a clear message — adopters won’t always take on something new, even if it has considerable potential advantage for them. A key factor is the idea of compatibility — how well does it fit their world?
Whether it’s George Washington Carver trying to persuade rural farmers in Alabama to adopt new seed types or different farming methods, or Brownie Wise explaining the virtues of Tupperware to 1950s housewives, the story is the same. Users need to be convinced that innovations will fit into their world, otherwise they will ignore them.
So it makes sense to listen to users and their concerns early, because if these can be addressed the chances of widespread adoption downstream are much improved. Early users are the vanguard for the rest.
All of this creates significant opportunity for innovating organizations — it’s what Eric von Hippel and colleagues call ‘free innovation’. By which they mean that where conventional innovation processes try to assume what the market wants and often limit their activity to playing safe rather than exploring uncertain new market territory, user-linked innovation offers a key to unlock this potential. Better still, most users are not primarily motivated to scale their innovations. Their incentive is around solving their own problem; if others can benefit, that’s a bonus.
So it looks like a no-brainer. Involve users because you get more ideas, uncertainty reduction, ambassadors for diffusion and faster and more compatible downstream adoption. And yet…
The picture is often one in which the rhetoric of user involvement is there but is undermined by the reality of inaction on the ground. There’s the perception that involving users will somehow slow the process, inhibit the agile nature of today’s competitive innovation process.
So we have a paradox; on the one hand user inclusion offers so much free innovation advantage. And on the other there is the perception that the costs of engaging with this outweigh the benefits.
Responsible innovation is free?
It’s worth looking briefly at this paradox and at some historical parallels. Back in the last century the idea of quality was seen as a compromise — you get what you pay for, and if you want perfection it costs. But a quiet revolution took place. It moved us from a world in the 1960s where ‘acceptable levels of defects’ were assumed and often measured in percentage terms, where even the rocket systems designed to keep us safe were patchy at best; one report suggested that in 1950 only 30 per cent of the US Navy’s electronics devices were working properly at any given time.
That’s moved to today’s world where zero defects is the aspiration, and where tools like Six Sigma, together with the huge mindset and skill changes amongst employees needed to deploy them, have meant that we can consistently achieve parts-per-million levels. Quality is the standard now; deviate from it if you dare.
That revolution has something in common with our RI question. One of the architects of the quality revolution was Philip Crosby, who worked tirelessly to promote a different message about the importance and achievability of quality. His message was simple — and his book had the highly relevant title ‘Quality is Free’. By which he didn’t mean that implementing the systems and training to enable high quality performance was free, but rather that the investment in doing so was dwarfed by the savings associated with reducing the ‘cost of quality’. He drew attention to the fact that the real costs of quality were not just the staff and admin costs associated with running a quality system but a long list of other associated costs, from wasted materials which had to be scrapped, to wasted time in producing them, to dealing with unhappy customers whose acquisition had cost so much and who now deserted the company. These were real and significant costs; for example, in 1984 when IBM first began looking at this problem they estimated that $2 billion of its $5 billion profits was due to improved quality — not having to fix errors.
The real cost of quality arises when you don’t pay attention to it. In the same way, history reminds us that the real cost of non-inclusion of users and stakeholders arises when you don’t take their concerns into account. It’s a double loss: you miss out on their ideas, and you fail to get acceptance and widespread diffusion.
So perhaps the time has come for us to reconsider the ‘business case’ for RI and rebuild the equations which decision-makers use when considering whether or not to invest in working alongside users. The good news is that there are many tools and methods, an increasing number of enabling spaces and much more in the way of experience. We know how to work with them to co-create more responsible solutions; the challenge is around committing to doing so.