AI through the rear-view mirror
What we can learn from the past to help navigate the AI storm…
There’s a technology out there that’s coming for you. You can’t escape it; it has already acquired too much momentum and its progress is accelerating. It’s like watching a tidal wave: you can already see it on the horizon and it’s coming fast for the land. And on the shoreline people whirl about in panic, wondering what they are going to do and how they’ll cope with it.
Companies are falling over themselves trying to get their strategic heads around it: what it might mean for productivity and profit, but also what havoc it might wreak on their carefully constructed organizations. Will they be able to exploit the wave of opportunity this technology seems to bring, or will they be swamped by it, losing out to others who are more focused and agile?
Governments worry about the huge social disruption likely to come as it cuts a wide swathe through jobs which can be replaced by smart technology embedded in an army of robots. Not only pairs of hands but people with an extensive range of skills will be side-lined to the unemployment queues. And the new skills which might need urgent training, to help the workforce adapt and exploit the productivity improvements the technology offers, are unclear and hard to plan for. Worst of all, the impact will not be confined to a handful of old sectors which can be exchanged for new ones; this wave of change threatens across the board.
Couple that with concerns about privacy, about ethics, about how to control something whose potential seems to rewrite the rules of the game. There’s an urgent need for regulation, for setting up guidelines and guardrails, but a lack of understanding about where and how to construct them, and a distinct sense of trying to close stable doors after the horse has long ago made its escape and is now roaming free across the landscape.
Not surprisingly the media is full of stories about this, playing up the panic to the point where the technology is constantly centre stage and the discourse around it doom-laden. A Chicken-Licken world in which there seem to be daily reminders that the sky is about to fall in. Headlines and programme titles like ‘Beware the march of the robots’ and ‘Now the chips are down’ do little to foster an atmosphere of calm critical appraisal.
Sound familiar? One way of describing the current anxieties around AI? Actually no; this version of technology angst is an older ancestor. These scenes come not from the AI revolution but from its grandfather forty years ago, the ‘microelectronics revolution’. Another time when the efforts of thousands of white-coated scientists in physics labs all round the world converged to create the potential for widespread application of programmable devices, whose inexorable progress was enshrined in Moore’s Law: the power of the chips doubling and the costs falling year on year as they took over the world.
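To get a feel for what that compounding implies, here’s a back-of-envelope sketch in Python (assuming, purely for illustration, the commonly cited doubling period of roughly two years):

```python
# Rough illustration of Moore's Law style compounding:
# capability doubles every `doubling_period` years.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times over capability has multiplied after `years`."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(f"After {years:2d} years: ~{growth_factor(years):,.0f}x")
# After 40 years: ~1,048,576x -- which is why 'revolution' is not hyperbole
```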
It didn’t quite happen the way the doom-laden forecasts suggested, though there was undeniably a major shift across the techno-economic landscape of the time. First, it took longer than predicted for the ‘revolution’ actually to happen; and second, the socio-economic context also changed, creating new ways of thinking about and working with the powerful core technology.
Reviewing this experience might give us a clue to how we could deal with the current commotion around AI. Whether it is governments planning employment and skills policy or companies trying to find a way through the maze, the message is clear. We need a strategic response, not a panic button. AI offers both threats and opportunities; coming to terms with it will depend on a considered and forward-looking approach, something which research increasingly confirms.
Looking back…
So looking back might be one thing we can do to help get a sense of perspective this time around. Here are five key lessons we’ve learned about technological revolutions.
(a) They take a long time…
‘Long live the revolution’ might be a useful political slogan but it’s a bit redundant in the world of technological revolutions; longevity is the name of the game. They usually have the characteristics of what Andrew Hargadon neatly describes as ‘long fuse, big bang’: their impact takes a while to come through, but when it does it shakes a lot of foundations.
Whether we’re talking about steam power, electricity, microelectronics or internet-enabled communications technology, it’s the same old story. An initial flurry of interest generated by some impressive technological breakthroughs which then go in search of application. The process is one of mutual adaptation: far from being a simple ‘plug’n’play’ solution, powerful new technologies need to be adapted and configured before they have real impact. That’s what takes the time: learning what and where the technology can best work, and bringing to bear suitable skills and organizational arrangements to match it.
The emergence of electric power is a good example. For a long time electric motors were used as replacements for the big steam engines which stood at the heart of every factory, linked by a complex series of pulleys and drive shafts to deliver power from a central source. It was much later that the reframing around smaller, decentralised power units for individual machines emerged, and with it the big productivity gains which the technology had to offer.
(b) They’re not more of the same, they change the game
Carlota Perez and Chris Freeman carried out extensive research on technological revolutions and developed a helpful model for understanding and working with them. They talk about the need to change the ‘techno-economic paradigm’ (TEP): the way we frame both the technology and the wider socio-economic context in which it emerges. Yes, revolutions have huge impact, but it not only takes time, it crucially also involves a reframing process, what Thomas Kuhn in his famous book ‘The Structure of Scientific Revolutions’ calls a ‘paradigm shift’.
Their research began by picking up the ideas of a Russian economist, Nikolai Kondratiev, who noticed regular big shifts in economic growth associated with periodic surges in key technological fields. He called these ‘long waves’, and Perez and Freeman set out to explore the dynamics behind such significant shifts. Central to their argument is the need to think about mutual adaptation of technology and its socio-economic context, and to manage implementation as a process of learning along converging pathways.
The upswing of the last Kondratiev wave could be traced back to the microelectronics surge of the 1980s and the advances in information and communication technologies which it enabled: essentially the internet world with which we are familiar today. But it wasn’t a rapid transition, nor was it a purely technological one. Instead, parallel changes in society, such as the rise of social networks and globalisation, played a key role in shaping the context. In turn, feedback loops developed: the rise of the mobile phone, for example, was driven in no small measure by schoolkids adopting and elaborating a simple message system originally developed as an engineer’s tool for testing network connectivity.
By now we should have learned that there is often a hype cycle to ‘revolutionary’ technologies; they seem to promise everything, but getting the bang for the buck turns out to be a bit trickier. Back in the 1990s the Holy Grail of computer-integrated manufacturing seemed tantalisingly within our grasp. We could have whole factories running under automatic control, with the huge productivity gains that implied, staffed by robots and employing only one man and a dog to keep an eye on them (the role of the man being to feed the watchdog).
Progress in key complementary areas of design, making and administration promised to revolutionise any manufacturing business. We could automate not only within these spheres but between them, sharing data and codifying everything into programs which talked to each other. Firms rushed to invest; governments added rich incentives in the form of loans and grants to accelerate the process. The bubble grew and grew.
Except that it didn’t quite work out that way. Despite spending billions, productivity and performance hardly improved. Many surveys came to the same rather dismal conclusions; firms were disappointed and the gilt came off the promise. Even GM, which placed a huge bet on being able to automate its way back to competitiveness (buying EDS, the biggest software company in the world at the time, to help it do so), still ended the decade in an inferior competitive position to Toyota.
That isn’t to say the benefits weren’t there, but rather that realising them required a rethink, a strategic approach. Take the case of ‘flexible manufacturing systems’: mini-factories employing robots and machine tools linked in integrated cells. This was potentially a game-changing technology, but as a detailed research article pointed out, the Japanese and Americans had very different views on implementation. For the US it was a matter of automating to replace or deskill; for the Japanese it was a matter of automating to complement and enhance. The productivity gains from the latter far exceeded those from the technology-push approach.
The key lesson: we need to consider major technological change in a wider context, one which involves skills, work organization and other factors.
(c) We need to understand their limitations
During the early days of the microelectronics revolution the UK’s National Economic Development Office (NEDO) convened a large-scale Delphi study into the potential implications of the technology and the policy challenges it might pose. Using its convening power as a tripartite organization bringing together government, business and trade unions representing employee interests, it recruited a large community of experts to offer their best guesses about the future with information technology. (Delphi studies pool such guesses into an integrated picture which is then fed back to and refined by the experts, gradually moving towards a well-informed synthesis of views.)
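As a minimal sketch of that mechanism (a toy simulation only, not how NEDO actually ran its study), each round the panel sees the group’s aggregate view and revises towards it:

```python
# Toy Delphi process: experts revise their estimates after seeing the
# group median. Illustrative only; real studies use structured
# questionnaires and reasoned feedback, not a fixed adjustment rule.
import random
import statistics

random.seed(1)
estimates = [random.uniform(10, 90) for _ in range(12)]  # initial expert guesses

for round_no in range(1, 5):
    group_view = statistics.median(estimates)
    # each expert moves a third of the way towards the group view
    estimates = [e + (group_view - e) / 3 for e in estimates]
    spread = max(estimates) - min(estimates)
    print(f"Round {round_no}: median {group_view:5.1f}, spread {spread:5.1f}")
```

The spread narrows with each round: a crude picture of how repeated feedback moves a panel towards a synthesis of views.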
In this case the results were published in a book and used to support a wide range of action-oriented workshops with key decision makers and policy designers. Twenty-five years later the predictions were compared with what actually happened, and whilst there were inevitable gaps and missed cues, many of the imagined futures had actually emerged. More importantly, the evidence-based input to policy making during the early days of the ‘revolution’ helped create, adapt and configure solutions to ameliorate major challenges and provide clarity about key problems.
This underlines the need for careful and informed assessment and future scanning when encountering technological revolutions. Despite the more ambitious claims made about them, revolutionary technologies have limitations, and we need to temper the sensationalist reporting with more thoughtful technological analysis. The bridge between what might be possible and what can actually happen is a long one. Ethan Mollick captures it well with his idea of a ‘jagged frontier’, along which AI is in places very advanced and proven, and in others underdeveloped and experimental.
(d) Progress is not linear
In the early days of the microelectronics/IT revolution the first applications of the technology were essentially about doing what we already did in a better form, using the technology as a substitute for what had gone before. It was only later along the Kondratiev wave that we moved to seeing new possibilities, augmenting and extending what we did to find new and powerful applications. This maps on to the Freeman/Perez model, which sees early innovations as shaped by ‘old paradigm’ thinking; it is only when the new paradigm becomes clear that the big transformative leaps emerge. Think of the differences between the early days of the internet and the big shifts with Web 2.0: the move towards platforms connecting and integrating activities in hitherto unimagined business models.
This suggests that we need to explore different ways of working with AI, moving from recruiting it to help automate clearly established current tasks through to using it in ‘co-pilot’ mode to augment and extend what we are currently capable of. Alan Brown, in a recent blog, offers a helpful framework mapping the dimensions of the relationship between humans and AI across these modes. He also adds a third mode, which he labels ‘expert systems’, where complex problems can be tackled in novel ways using a combination of human and AI insight.
Significantly, while the early ‘automation’ stage may well displace jobs as AI takes over repetitive and standardised tasks, the other modes offer scope for new roles and the deployment of novel human skills.
(e) Revolutions need strategy, not tactics
A consequence of the above is that organizations need to think strategically about how and where they deploy AI, looking not only at short-term ‘automation’ but also exploring and experimenting with new ways of working. With this comes a significant challenge to rethink skills profiles, organizational structures and operating models. Significantly, in the earlier IT revolution it was often entrepreneurial newcomers who saw the real potential and were able to use the technology to advantage, while incumbents were often unable to move and change fast enough.
This is an urgent issue. A Gartner survey recently found that despite the belief amongst 74% of CEOs that AI will significantly impact their businesses (up from 59% in 2023), nearly half of them report that their investments so far have not met expectations. As Alan Brown comments, reviewing a number of recent reports: ‘…It’s easy to get caught up in the excitement of the latest AI breakthroughs. However, focusing on using the newest, shiniest technology rather than addressing real user needs is a recipe for failure. The most advanced AI isn’t always the best solution for every problem… successful AI projects demand strategic focus and long-term commitment. Industry leaders should identify enduring problems that align with organizational goals and commit product teams to solving them for [a] significant period of time’.
The same is true at policy level; AI may offer significant opportunities for improving the productivity and quality of public services, but it will only do so with a parallel radical rethinking of the way institutions operate. In both cases the challenge is to find new business models rather than trying to graft AI onto business-as-usual structures. Governments (like the UK’s) which look to make AI the foundation of radical changes in key services like healthcare need to do so in focused and planned fashion, not simply by throwing up the fairy dust and hoping.
We need responsible innovation
Of course it’s not simple. The past is always an imperfect guide to the future, just as driving by looking through the rear-view mirror is a dangerous approach. AI differs in a number of ways from things we have seen before. We still don’t fully understand what it can do; it is evolving at a breath-taking pace. And we need to factor in a significant difference from earlier revolutions: this technology can, to some extent, think for itself. It has generative capabilities; it can create what we can’t fully imagine.
Which means we need to take particular care and keep it very much under the spotlight.
Responsible innovation (RI) is a relatively new label for an old idea: that we need to think hard about new technologies, and do so ahead of their widespread deployment. In the early fluid phase of any innovation there are opportunities to explore the ‘design space’ it offers, and to try to shape its development and diffusion in ways which are beneficial rather than threatening to those involved. Richard Owen and Jack Stilgoe developed a helpful framework for thinking about such a responsible approach, one which calls for:
· Anticipation — systematic looking ahead and exploration of potential scenarios, both positive and negative
· Reflexivity — being prepared to examine and challenge our approaches and revise them as we shape the technology
· Inclusion — ensuring that the voices of all stakeholders are heard and their views and concerns taken account of in development and diffusion
· Responsiveness — making full use of the potential within a technology to be adapted and configured.
These principles provide the potential for helping us construct guidelines and guardrails — but only if organizations involved in developing and adopting AI are prepared to take them on board.