Innovating innovation

john bessant
9 min read · Feb 2, 2023


How machine learning is transforming the innovation game

Image: Wikipedia

One of the difficult parts of being a parent is when your kids grow up and you lose the excuse to play with their toys. And in my case one that I particularly miss is the Transformers series. Originally developed in the 1980s and accompanied by a TV spin-off, these robots could masquerade as ordinary vehicles like cars and oil tankers. And then, at a crucial moment, they could reassemble themselves into well-armed fighting robots able to save mankind on a weekly basis from all sorts of alien threats. The toys were masterpieces of engineering; the underlying story clearly had staying power, since there is a new generation of Transformers (and video/movie accompaniment) today.

For their time they were symbols of the power of transformation, being able to adapt and repurpose to deal with new challenges. And these days we have a much more powerful and real example of such power in the form of a new generation of machine learning models.

Image: Created using Dall-E

Machine learning has its roots in experiments with ‘artificial intelligence’ back in the 1970s but has come to represent a powerful technological trajectory as the idea of mimicking human neural networks and their learning capabilities has been explored. We’ve seen with increasing frequency many bastions fall to these models; it seems a lifetime ago (1997 actually) that IBM’s Deep Blue beat chess champion Garry Kasparov, deploying something of a brute-force approach. But by 2016 Google DeepMind’s AlphaGo model managed to beat the world champion Lee Sedol at the much more complex game of ‘Go’. And recent contests in which machine learning seems to have ‘beaten’ human opponents include games like poker, which involve not only strategy but the ability to bluff — essentially requiring computer models to imagine what an opponent is thinking and then generate a diversionary move.

As Jang Dae-Ik, a science philosopher at Seoul National University, told The Korea Herald after AlphaGo’s victory ‘This is a tremendous incident in the history of human evolution — that a machine can surpass the intuition, creativity and communication, which has previously been considered to be the territory of human beings…..Before, we didn’t think that artificial intelligence had creativity…..Now, we know it has creativity — and more brains, and it’s smarter’.

At heart these developments reflect a fundamental shift in machine learning applications and models. In the early days, models were used to help with highly focused activities — for example in data mining, where they might be searching for something specific. But now we have generative AI, which does what it says on the tin — it generates something new. And this brings an uncomfortable challenge to our perception of ourselves as the only ones capable of creativity — of generating novel and useful solutions to challenges.

A quick review of the growing literature on ‘artificial creativity’ shows that there are grounds for worry. Machine learning models can now ‘create’ music, literature or visual art to a standard which makes it increasingly difficult to detect their non-human origin.

Image: Generated by Dall-E

For example, the Next Rembrandt project was an attempt by a team of art historians, data scientists and engineers to teach a machine to think, act and paint like Rembrandt. The documentary film of this venture highlights the challenges and complexities involved in producing a painting which convinced many — 347 years after the painter’s death!

In similar fashion, there are a number of websites featuring music composed by AI in the style of — and often hard to distinguish from — the original composer. And in 2016 IBM’s Watson AI engine produced a trailer for the horror movie ‘Morgan’. This involved Watson ‘watching’ and analysing hundreds of examples of trailers and then selecting scenes for editors to patch together into the finished trailer. This cut the time for the process from over a week to less than a day.

Which brings us to ChatGPT and the explosion of interest in this particular model. It was launched by OpenAI in November 2022 as the latest in a series of generative models with the capability to come up with its own answers to questions posed to it. (Amongst its predecessors is Dall-E, a powerful image generator.) The GPT stands for Generative Pre-trained Transformer, a class of model which the company has been working on for some time.

At its heart ChatGPT (and its equivalents in the labs of Google/Alphabet, Meta and many other companies) is a large language model trained on a vast body of text. It has the ability to explore and analyse that material and ‘learn’ how to synthesise coherent and credible answers to questions posed by a very wide and diverse audience. Within two weeks of its launch ChatGPT had attracted over a million users, and demand is now so high there is a waiting list to access it. People have been experimenting with its capabilities to create songs and poems, write newspaper articles, answer exam questions and even to enter and pass the preliminary tests for people wishing to qualify as medical professionals in the USA!

Not surprisingly, OpenAI has seen its valuation rapidly escalate to around $29bn, with Microsoft taking a significant stake in the business. It’s likely that the next year will see an explosion of interest in such models, with new and better variants and increasing competition from other players.

Innovating innovation

Image: Generated by Dall-E

One area where such models may well have a significant impact is in the field of innovation itself. In an excellent article Frank Piller and colleagues explore this — and the implications for innovation management. They point out that there is already increasing use of generative machine learning models in innovation; these include searching large data sources to identify insights around customer needs and using generative models to create marketing and advertising copy for new products and services.

They map their analysis of where and how such models might be used on to a typical representation of the innovation process — the so-called ‘double diamond’ associated with ‘design thinking’. Here there is a front end concerned with exploring the ‘problem space’ — understanding user needs and potential opportunities. Work at this stage involves divergent exploration followed by convergence, closing in on promising directions. It is followed by a second divergent/convergent diamond concerned with exploring the ‘solution space’ and then closing in on projects to be taken further.

What they were interested in was the ways in which features of machine learning might help with these activities and the possible impact on how innovation is undertaken — and by whom.

A fascinating feature of their research is that they explore this not just on the basis of informed speculation but by putting ChatGPT to the test, giving it some innovation challenges to work on. Thinking about the possibilities for new products in the field of camping and outdoor activity, they designed three questions to put to the model, looking for whether and how new insights might be generated to help with:

· Searching through large data sets containing information about potential new directions and trajectories

· Exploring data on customer experience and searching for new insights into potential needs

· Helping create new concepts around which innovations might be developed

All of these are typical tasks which innovation teams undertake in organizations; for example, they spend a lot of time at the front end researching what has already been done, drawing in knowledge and building a picture of the possible problem and solution spaces. They deploy a wide range of market research tools, including various forms of trend analysis. And they work with a range of creativity tools to generate possible solution options for further progression.

It’s early days, but the performance of the machine learning model was instructive. In exploring what is known about camping gear, a Google search identified 299 million results, which certainly exceeds the capacity of even a small army of human researchers to analyse! ChatGPT did a good job of analysing and pulling out results of possible relevance, providing at least a powerful first-pass filter.

In its second task ChatGPT managed to make sense of a wide range of customer reviews to generate insights into trends and possible needs — so-called ‘sentiment analysis’. Once again its skill in sifting through the text of thousands of reviews showed potential for providing new insights into emerging and hidden customer needs.
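The paper doesn’t reproduce the exact prompts, but the basic pattern is easy to sketch. Here is a minimal illustration in Python of how a team might ask a chat model to mine a handful of reviews for sentiment and unmet needs. It assumes the OpenAI Python client; the model name, prompt wording and review snippets are placeholders of mine, not the researchers’.

```python
# Hypothetical sketch: asking a chat model to mine customer reviews for needs.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative reviews; in practice these would be scraped at scale.
reviews = [
    "The tent poles snapped in moderate wind on the second night.",
    "Love the stove, but it's far too heavy for backpacking trips.",
    "Great sleeping bag, though packing it away takes forever.",
]

prompt = (
    "You are analysing customer reviews of camping gear.\n"
    "For each review, give the sentiment (positive / negative / mixed) and the "
    "underlying customer need it hints at, then list any unmet needs that "
    "recur across the reviews.\n\n"
    + "\n".join(f"- {r}" for r in reviews)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not the one used in the study
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In practice the same pattern would be run over thousands of reviews, with the model’s output feeding into the team’s own interpretation rather than replacing it.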

And in the field of creating potential solutions the researchers set the model a brainstorming kind of task — to come up with novel and useful ideas for new camping products. The strategy here is to prompt the model with some examples of typical brainstorming insights and then allow it to learn how to generate its own. Once again the performance on the task was impressive; not only did it come up with plausible incremental innovation ideas, it also generated some radically new ones which opened up new solution space.
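Again the paper doesn’t publish its prompts, but the few-shot strategy described above is straightforward to sketch in the same assumed Python/OpenAI setup as before. The seed ideas and model name below are illustrative placeholders of mine.

```python
# Hypothetical sketch of the few-shot brainstorming strategy: seed the model
# with example ideas, then ask it to generate further ones of its own.
from openai import OpenAI

client = OpenAI()

# Human-written seed ideas standing in for the 'examples of typical
# brainstorming insights' mentioned above (placeholders, not from the paper).
seed_ideas = [
    "A tent with an integrated solar panel that charges devices overnight.",
    "Collapsible cookware that nests into a single flat disc for packing.",
    "A sleeping mat that doubles as a flotation aid for wild swimming.",
]

prompt = (
    "We are brainstorming new camping products. Here are some example ideas:\n"
    + "\n".join(f"- {idea}" for idea in seed_ideas)
    + "\n\nSuggest ten further ideas in the same spirit, including both "
    "incremental improvements and more radical concepts that open up new "
    "solution space."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,  # a higher temperature encourages more divergent output
)

print(response.choices[0].message.content)
```

Nudging the temperature parameter upwards is one simple way to push the model towards the more divergent, ‘radical’ end of the ideas it returns.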

At first glance this kind of performance across several areas of the innovation process might seem worrying. Even though there has been a backlash against the wave of enthusiasm around generative machine learning models, the overall trajectory looks ominous in terms of its implications for ‘creative’ tasks in organizations. If machine learning continues to improve, how long might it be before we no longer need human beings to work in the innovation process?

The reality seems to point more towards a hybrid model in which AI supports human activity — for example by using it to sift through enormous amounts of data and extract potentially relevant information which its human counterparts can then work with. As the researchers conclude, ‘….by expanding the problem and solution spaces in which NPD (new product development) teams can operate, language models create an opportunity to access and generate larger amounts of knowledge, which in turn results in more possible connections of problems and solutions. This should ultimately lead to qualitatively superior solutions and higher innovation performance’.

So can we relax and not worry about the machines taking over our innovation role? Not really — if we want to take advantage of the powerful hybrid approach which Frank Piller and his colleagues point towards then we need to start learning some new skills and developing some new working arrangements to capitalise on it. We’re going to need a lot of innovation model innovation.

P.S. In writing this piece I did NOT make use of ChatGPT (though I was tempted to try!). But most of the images were created in a few seconds by its sister model Dall-E in response to some simple prompts…..

You can find a podcast version of this here

And a video version here

If you’d like more songs, stories and other resources on the innovation theme, check out my website here

Or listen to my podcast here

And if you’d like to learn with me take a look at my online course here


john bessant

Innovation teacher/coach/researcher and these days trying to write songs, sketches and explore other ways to tell stories