Artificial Intelligence is no longer just a buzzword, but neither is it the reality people believed it would be. A 2019 Gartner CIO Agenda survey shows that 14 per cent of global CIOs were looking to double the number of AI projects in their organizations, and more than 40 per cent expected to deploy AI solutions by the end of 2020. However, scaling AI pilots into enterprise-wide production is proving difficult, and many organizations are also unclear about the business impact and benefits of such deployments.
This is largely because many still do not understand what AI is and what it can and cannot do. That's challenge #1. Techopedia defines Artificial Intelligence (AI) as an area of computer science focused on creating intelligent machines that work and react like humans. That is an ambitious definition and can cause misunderstandings. In practical terms, AI is an advanced tool for data matching. Data comes in three kinds: numerical, textual and multimedia, which includes audio and visual content. Machine learning, a subset of AI, is best suited to textual searches, while deep learning can be used for audio and visual matching.
Challenge #2 is identifying the areas where AI can be used within the enterprise. This requires understanding the objective and defining an AI deployment strategy to achieve it. The Gartner survey indicates that 42 per cent of users do not understand the benefits of AI.
For a retail customer, Indium Software built a cognitive analytics model that leveraged security cameras across the store to generate store heat-maps and visualizations. Using facial recognition and tracking, the model extracted information on how many customers exited the store without making a purchase. AI technology also enabled insight into variations in customer behaviour, such as footfalls by area of the store and time of day.
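The heat-map step of such a system can be sketched minimally. In the toy example below, tracked customer positions are hypothetical coordinates (a real deployment would obtain them from camera-based face tracking); they are binned into a grid of store cells, where higher counts indicate busier areas.

```python
def build_heatmap(detections, grid=(10, 10), store=(50.0, 50.0)):
    """Accumulate (x, y) detection coordinates into a store heat-map grid."""
    rows, cols = grid
    heatmap = [[0] * cols for _ in range(rows)]
    cell_w = store[0] / cols   # width of one grid cell, in metres
    cell_h = store[1] / rows   # height of one grid cell, in metres
    for x, y in detections:
        col = min(int(x / cell_w), cols - 1)
        row = min(int(y / cell_h), rows - 1)
        heatmap[row][col] += 1
    return heatmap

# Hypothetical tracked positions in metres (two near the entrance, one at the back)
sample = [(2.0, 3.0), (2.5, 3.5), (40.0, 45.0)]
hm = build_heatmap(sample)
```

In production, the same grid would be overlaid on the store floor plan to produce the visualization; the binning logic itself stays this simple.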
Challenge #3 is developing the input data; before we move on to tools and teams, we need to address it. An AI model must be trained before it can be used for its purpose, and the training should be followed by testing to ensure the model performs correctly. Therefore, to develop an efficient AI tool, you need:
- Sufficient training data
- Sufficient testing data
- Sufficiently defined features
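As a minimal illustration of keeping training and testing data separate, the sketch below (using hypothetical labelled records) shuffles the data and holds out a fixed fraction for testing, so the model's performance can be checked on examples it never saw during training:

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Shuffle labelled records and hold out a test set.

    A separate test set is what lets you verify the trained model
    is performing correctly before deploying it.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical labelled examples: (feature vector, label)
data = [([i, i * 2], i % 2) for i in range(100)]
train, test = train_test_split(data)
```

With 100 records and a 20 per cent hold-out, this yields 80 training and 20 testing examples; real projects typically need far larger volumes of both.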
For instance, for one of Indium Software's real estate clients, the AI team leveraged deep learning technology to extract boundary descriptions of plots from a large number of PDF deed and lease documents. By applying text analytics methods (RegEx, NER), Indium was able to automate the text extraction and reduce manual intervention. This saved the client considerable time and improved efficiency.
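A hedged sketch of the RegEx part of such a pipeline: the deed text and the bearing-and-distance pattern below are hypothetical simplifications, but they show how boundary "calls" can be pulled out of free text without anyone reading the documents manually.

```python
import re

# Hypothetical deed excerpt; real documents and patterns will differ.
deed_text = """
BEGINNING at a point on the north line of Main Street;
thence North 45 degrees East 120.5 feet to a point;
thence South 30 degrees East 95.0 feet to a point;
"""

# A simple metes-and-bounds pattern: bearing (e.g. "North 45 degrees East")
# followed by a distance in feet.
call_pattern = re.compile(
    r"(North|South)\s+(\d+)\s+degrees\s+(East|West)\s+([\d.]+)\s+feet",
    re.IGNORECASE,
)

calls = call_pattern.findall(deed_text)
```

Each match is a tuple like `('North', '45', 'East', '120.5')`; an NER model would complement this by tagging entities (street names, parties) that do not follow a fixed pattern.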
Challenge #4 is that the training has to be ongoing, evolving to include new situations and datasets to keep pace with a changing world. This again raises the need for sufficient training and testing data, with enough features labelled to allow the tool to identify or eliminate items as the case may be.
Challenge #5 is linked to the need for a clear objective: knowing which of the many available algorithms is ideal for producing the best results. This requires a good AI/ML model design and architecture that best fits the goal.
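The core idea of algorithm selection can be shown with a deliberately trivial sketch: score each candidate model on held-out data and pick the best. The two "models" and the data here are hypothetical stand-ins for real candidates such as different classifiers.

```python
def accuracy(model, data):
    """Fraction of labelled examples the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# Two toy candidate 'algorithms' for a one-dimensional task.
def majority_model(x):
    return 1  # always predict the most common class

def threshold_model(x):
    return 1 if x >= 0.5 else 0  # a simple decision rule

# Hypothetical held-out evaluation data: (feature, label) pairs.
holdout = [(0.1, 0), (0.2, 0), (0.6, 1), (0.9, 1), (0.4, 0), (0.7, 1)]

scores = {name: accuracy(model, holdout)
          for name, model in [("majority", majority_model),
                              ("threshold", threshold_model)]}
best = max(scores, key=scores.get)
```

The same evaluate-and-compare loop scales up to real model families; the hard part in practice is choosing metrics and candidates that match the business objective.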
Challenge #6 is knowing which AI development tools and platforms are right for your organization. Most organizations use open-source Python, which requires extensive coding. Most cloud service providers, such as Google and Microsoft, also provide AI tool sets, which are relatively easy to use. But while they allow compatibility, they naturally work better with other tools from their own family. Customization is also not possible, which can be severely restricting. A business cannot invest in multiple tools and benefit from the strengths of each. A specialist like Indium, on the other hand, has access to multiple tools in addition to its own proprietary tool, tex.ai (patent pending), which gives it more flexibility.
Challenge #7 is that AI demands very high computational power. This, naturally, comes at a cost, which is why the likes of Google and Microsoft can afford it while smaller firms cannot. This is one of the reasons pilot projects are unable to scale up.
Challenge #8 is building the AI team. These are specialized skills that come at a cost. With uncertain end results, the AI team's work is at best research-and-development oriented, and it remains an investment until it can start assuring results. This is something the business has to be ready to spend on. Also, a business will tend to build a team with domain expertise, whereas AI projects need cross-domain experience to address challenges creatively and try out new things. Working with a combination of tools is also more beneficial, but an individual business has to be ready to make that kind of investment.
Challenge #9 is that uncertainty is a big risk that businesses have to be ready to take on. Gartner identifies this as one of the fears that stops businesses from going too deep into AI technologies. Moreover, the benefits cannot always be quantified, and there have to be ways to measure intangible benefits too.
Challenge #10 is the fear that AI will eliminate jobs. This fear can hinder the implementation of AI projects. However, Gartner estimates that by 2020, AI will in fact create 2.3 million jobs while eliminating 1.8 million. AI cannot replace humans, but it can enhance their efficiency. So while humans will become redundant for mundane tasks, they will be required for more evolved roles.
To overcome these hurdles, businesses can partner with AI specialists. Specialists such as Indium Software bring cross-domain expertise, experience with multiple platforms and tools, and the required infrastructure. This keeps costs low and lets businesses focus on their core areas instead of investing in AI technology and teams. Indium's expertise can also yield creative approaches to problems, drawn from past experience.