Seeing beyond the AI hype
Again, just for context, the next few posts are lifted from my profile on LinkedIn.
Unlike you all, I've been having daily discussions about AI and its implications ;-)
So I thought I'd share some ideas, if only to spark your thinking and further debate amongst you and your peers.
Having been in the IT industry for a few years now, I can say this is one of the first times I've seen such a significant shift in what is possible arrive without firm resistance from the enterprise. On the contrary, enterprises are all over this with real enthusiasm.
In times past it took considerable encouragement for them even to conceive that, for example, digital transformation might be a critical concept to consider, especially pre-pandemic.
And amid all the AI hype we are seeing at the moment there are fantastical projections that AIs will take over the world. They will, but possibly not in the way we currently imagine.
You see, the problem we face is that we are trying to question and answer tomorrow's insights using our current context. And so, just as with Cloud Computing before it, it is very clear that many companies are going to spend a fortune on rushed attempts, typically led by Enterprise IT and vendors, only to find themselves underwhelmed by the new silver bullet.
People are hurriedly putting together AI strategies and purchasing enterprise software and cloud capacity, only to find that they're not really sure what they are actually trying to accomplish.
This was demonstrated in a discussion I had a few days ago about the Dunning-Kruger effect in the context of the AI surge. Simply stated, the Dunning-Kruger effect is a cognitive bias we all share, in which people believe they are smarter and more capable than they actually are. Given that we are all new to this, it is quite possible we are making mistakes without realising it.
For instance, everybody wants an AI strategy. But what is the specific business problem and/or outcome that the AI strategy is meant to accomplish? As Elon Musk often states, it's not finding the answer that is difficult; it's asking the right questions. Given that bias, what is clear in my engagements is that we might be asking the wrong questions. When I ask for the objective of an AI strategy, lots of people end up looking at me like a cow at a new gate.
Sure, there are immediate gains. Personal productivity can soar once you understand how to construct a simple prompt pipeline. E-mails are much better, if not longer; new recipes, diet and exercise plans are now easily within everyone's reach.
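To make that concrete, here is a minimal sketch of what I mean by a prompt pipeline: chaining two model calls so the output of one becomes the input of the next. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name is my assumption, so substitute whatever your provider offers.

```python
# A minimal two-step prompt pipeline sketch, assuming the official
# openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use your provider's
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: draft an e-mail from rough bullet points.
draft = ask("Draft a short, polite e-mail from these notes: "
            "meeting moved to Thursday; agenda attached; RSVP by Friday.")

# Step 2: feed the draft back in for a tone-and-length pass.
final = ask(f"Tighten this e-mail to under 100 words, keep it friendly:\n\n{draft}")

print(final)
```

The point is not the code itself; even this trivial chain hints at the questions that follow, namely what data leaves your machine, and to whom.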
In my estimation, that's where the hype cycle will peak. Soon we'll all be using Copilot and trying to figure out what implications all of that has for data privacy.
The industry titans themselves are in a battle for market share and supremacy, and so the AI Wars beset us. This will continue to drive the hype cycle until our context shifts and we start to understand what the real questions should be.
Before I share some questions that might help direct you, here's another interesting point, one I have seen played out over and over in the IT industry. It's called Amara's Law, after Roy Amara, who stated that we tend to overestimate the effect of a technology in the short term but underestimate its effect in the long run. I think that captures the implications of AI for business and mankind.
Questions to clarify your AI strategy:
1. Is what we are doing with AI even the right thing? For example, if we had understood the impacts on the world pre-Industrial Revolution, would we have continued headlong into burning fossil fuels? Are we doing the same thing now?
2. Are the executive mandate and outcome clear before we spend on anything?
3. Is there a smarter, less expensive route?
4. What is the executive maturity relative to AI understanding and usage? What are the learnings?
5. What are acceptable tests? Using what platforms?
6. What are the top 5 "insert your company's name here" business problems I need to resolve using AI?
7. How is an LLM applicable to these problems?
8. Do we require more than just an API into ChatGPT? (by that I mean all providers, although, is it just me or is ChatGPT becoming the Hoover of the AI Age?)
9. How much money are we investing in HI (Human Intelligence) to counterbalance or surpass the AI investment?
10. And finally, it has to be here. What's our governance posture?
I look forward to your thoughts, comments and additional questions. If you want to pick up this discussion in more detail, give me a ping.