Creating Your AI Strategy

Talib Morgan
Apr 15, 2024
Courtesy of Midjourney

Leaders at organizations large and small find themselves considering the ramifications of AI on their businesses. It’s clear they must do something. Watching from the cheap seats isn’t an option. So, what to do? How do you create an effective approach to AI that allows you to take advantage of the technology in a deliberate way that pays off — either in ROI or in scalable learnings?

There is no approach to creating an AI strategy that will work for every organization. We know that industries and even departments can have different needs and ways of tackling challenges. There’s no universal, crystal ball answer right now. Moreover, the truth is that most leaders don’t need a specific AI strategy. Rather, they need to identify ways to integrate AI into their existing strategies that either improve or disrupt their current ways of doing business.

What I offer below are principles to keep in mind as you build your strategy.

Prioritize Experimentation

We’re experiencing a unique moment. ChatGPT was adopted by the general public faster than any technology we’ve known. It also came out of the gate swinging for the fences — delivering a very powerful capability in its v1 effort. That resulted in us seeing the technology as being more settled than it actually is.

The Large Language Model (LLM) technologies ChatGPT is based on are, in technology terms, still in their infancy. All of the capabilities we’ve come to associate with ChatGPT-like products specifically, and LLMs overall, as advanced as they may seem, are just the beginning of what’s possible. That means any strategy you create today could be obsolete a year or two from now. That’s how fast the technology is moving. Bloomberg experienced this first-hand when it spent $10 million to train a GPT-3.5-class model (i.e., the model behind the previous version of ChatGPT) on its financial data, only to find that GPT-4 performed better on the same data without any explicit training. We don’t quite know yet what the next iteration of the technology will bring or how it will change what came before.

What we do know, however, is that experimentation can yield plenty of insights. Even considering the Bloomberg example, Bloomberg’s AI team understands more about LLMs than they did prior to the endeavor. They can now make more solid inferences about the possibilities of LLM-based technologies as a result of their learnings.

My recommendation is to approach AI through experiments designed to gather evidence, so you can make educated determinations about how to apply the technology to your use cases over the long term. These experiments give your team building blocks they can use to scaffold what they learn. Doing this well means accepting that not every experiment will have a positive ROI, but each one will prepare your team to adapt quickly and strategically as capabilities improve.

Agile methodologies are strong candidates for setting a foundation for experimentation. I also believe the scientific method, the empirical model scientists have used since the 17th century, provides an outstanding framework for hypothesis-driven experimentation. I’ve outlined a high-level explanation of its steps for reference in the Scientific Method for the Customer Experience.

Specific Generalization

If ‘Specific Generalization’ isn’t cognitive dissonance, what is, right?

It should go without saying that when you’re running your experiments, the goal is to target a specific audience with use cases relevant to that audience. That’s Product Development 101. Unfortunately, that can get you into trouble when it comes to AI-based technologies.

Right now, running AI experiments can be expensive. Whether you’re using a third-party API to experiment or training a model on your own data like Bloomberg did, a significant commitment of time and capital is necessary to see your experiment through to completion. Because AI technologies are nascent, we’re all doing on-the-job training as we approach these ideas. Most organizations don’t have the benefit of years of experience to consider as they move forward. That’s precisely where specific generalization comes in.

You want to ensure that the use cases you select for your experiments are broad enough to be generalized later and applied successfully to other target groups. Almost all of us are interested in getting customers to spend more, but running an experiment that, for example, examines whether repeat customers on iPhone 15s are more likely to purchase after using your ChatGPT-powered chatbot could prove overly specific. Such an experiment imposes parameters that may be difficult to control and adjust for. Instead, you might consider opening up the test to more devices and perhaps a broader range of customers (and/or prospects).

There are definitely times when you want to be as specific as the example I referenced. That’s a great approach when your customers are well-segmented and you’ve established correlations showing that iPhone 15 customers have different CX-relevant behaviors than iPhone 14, Android, and/or desktop users. The approach is especially strong when you already have experience with the technology you’re testing. In the case of AI, most of us don’t yet have enough information about our customers’ appetite for the technology, or about the technology itself, to warrant hyper-specificity. A broader approach allows us to collect learnings that apply to a greater audience. You can then refine your approach as your knowledge broadens and your familiarity makes your parameters more certain.

Breadth of AI

What exactly is AI? An anomaly I’ve noticed is that AI has become synonymous with ChatGPT-like interfaces. It’s true that ChatGPT is the product that raised the profile of AI in the eyes of the public. In truth, though, chatbot technologies represent just one (albeit significant) facet of AI.

Artificial intelligence represents an extremely broad set of technologies that are capable of performing tasks that had generally been thought of as requiring human intelligence. Those tasks might include identifying patterns, understanding languages, making predictions, and recognizing objects. Recent technological advances have led to increasing accuracy and performance across each of those tasks. Among the types of AI that exist are:

  • Chatbots — Technology that simulates human conversation, using generative AI to determine how to respond to queries
  • Generative AI — Technology in which algorithms are trained on large sets of data, from which they establish rules and then create new content (e.g., text, audio, images, code, compounds, products)
  • Computer Vision — Technology that allows machines to identify and recognize objects detected through visual sensors like cameras or in images and videos
  • Machine Learning — Technology that applies mathematical calculations to datasets to make predictive, prescriptive, or descriptive determinations about the data
  • Natural Language Processing (NLP) — Technology that helps computers comprehend human language and make inferences about its intent
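To make the machine learning bullet above concrete, here is a deliberately minimal sketch in plain Python: a least-squares line fit over a tiny dataset, used to make a predictive determination. The ad-spend scenario and numbers are invented purely for illustration; real systems use far richer models and data.

```python
# Toy machine learning example: fit a least-squares line to a small
# dataset, then use it to predict an unseen value.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: monthly ad spend (in $k) vs. new customers acquired.
ad_spend = [1, 2, 3, 4, 5]
customers = [12, 19, 31, 42, 50]

slope, intercept = fit_line(ad_spend, customers)

# Predictive determination: expected customers at a $6k spend.
predicted = slope * 6 + intercept
print(f"Predicted customers at $6k spend: {predicted:.1f}")
```

The point isn’t the math; it’s that “machine learning” at its core means deriving rules from data and using them to make determinations about cases you haven’t seen yet.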

You can use all of these technologies today. Let’s take generative AI as an example. Google has used a generative AI-based tool to discover 2.2 million new crystals, which it claims is equivalent to 800 years’ worth of materials knowledge. Mastercard, meanwhile, is using generative AI to power a real-time “decisioning solution” that protects its payments network from fraud by predicting the safety of each card transaction.

Computer vision and natural language processing are similarly already in wide use. Computer vision is best known for facial recognition in public environments, but it has applications across many fields. I had an endoscopy recently, and the scope my doctor used relied on computer vision to identify objects in the patient’s (in this case, my) digestive system. Natural language processing commonly powers sentiment analysis, which allows computers to assess customers’ tone in customer service messages and on social media. These technologies are already being used — perhaps even by your competitors.
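To give a feel for sentiment analysis, here is a deliberately naive, lexicon-based sketch in Python. The word lists are invented for illustration, and production sentiment analysis relies on trained NLP models rather than keyword matching; this only shows the shape of the idea — turning customer text into a tone signal.

```python
# Naive lexicon-based sentiment scorer (illustrative only; the word
# lists below are invented, not a real sentiment lexicon).

POSITIVE = {"love", "great", "helpful", "fast", "excellent"}
NEGATIVE = {"broken", "slow", "terrible", "refund", "disappointed"}

def sentiment(message: str) -> str:
    """Classify a message as positive, negative, or neutral by keyword counts."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was fast and helpful!"))
print(sentiment("My order arrived broken. I want a refund."))
```

A real deployment would feed the tone signal into routing or reporting — flagging negative messages for priority handling, for instance.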

As you consider using AI, be mindful that the scope of AI is much more vast than chatbots. There are exceptional opportunities for using AI-based technologies to both create more engaging experiences for your customers and to improve operational decision-making.

Conclusion

For all that I’ve said here, one thing I haven’t said yet: in your effort to create an AI strategy, don’t lose track of your overall strategy. In over twenty years of technology leadership and consulting, I have seen leaders rush to experiment with new technologies without making an informed determination about how a technology fits into their team’s or their organization’s overall strategy. They want to use technology for the sake of using technology. Unless you’re part of a dedicated R&D group, don’t do that.

In the Scientific Method for the Customer Experience reference I linked to earlier, the first step is asking a question related to a customer (or business) need. Ensure your priority with any AI technology is how it can benefit the business.

Also, give some thought to partnership. Another mistake organizations make is believing they have to build everything from scratch. Unless your teams have experience with the type of AI technology you want to use, you might be best served by working with a vendor whose product achieves most of what you want to accomplish. A large luxury CPG client of mine that has done bleeding-edge work in AI used a hybrid model: they turned to vendors for key functionality and had their own team implement the integrations. That allowed them to reduce time-to-market for their experiments while ensuring their in-house teams were exercising their knowledge muscles with the new technologies.

Finally, I would strongly recommend considering data and privacy governance before undertaking any AI experiment. The ethical use of AI and its associated data is growing in importance. In all of the excitement about using AI to help make decisions or create better experiences, some organizations have overlooked using customer data within the expectations defined in public privacy policies or the data governance established by their companies. It’s easy to do because as novices, people don’t always understand that it’s hard to make AI forget what it knows. If you mistakenly make your customers’ identifiable data part of the information you train your AI tool on, you cannot easily remove that information without starting over. Understanding your organization’s AI ethics and data governance must be a precursor to any AI experimentation. Remember, prioritizing ethics isn’t just about avoiding harm — it strengthens customer trust, which can lead to long-term loyalty and competitive advantage.

Good luck with your AI strategy!


Talib Morgan

I am The Innovation Pro. I help enterprise teams innovate their customer experiences with emerging tech in an effort to drive customer commitment and growth.