
5 Hurdles to AI-Powered Knowledge Accessibility & How to Overcome Them


AI-powered knowledge access offers a solution to information overload but faces hurdles like accuracy, cost, and privacy. The following article, written by Skan's CPO and Forbes Tech Council member, explores these challenges and suggests solutions such as data quality checks and privacy-by-design principles. Read the original article on Forbes.


 

In my previous article on Forbes, I shared how AI can truly transform knowledge accessibility within enterprises. 

To bring you up to speed, here is a quick recap: AI can anticipate precisely what information digital workers need, and why they need it, without their making explicit requests.

Without such AI, workers must query knowledge management systems, which may not return accurate or up-to-date information because of limitations like keyword mismatches and outdated documents.

Additionally, digital workers dedicate nearly 30% of their time to searching for necessary information, drastically impacting productivity.  

Yet, the journey toward adopting such AI-driven systems reveals significant challenges like speed and accuracy of AI, change management, cost, data hygiene, privacy and security concerns.  

Let's dive into these challenges and ways to overcome them to fully harness AI for proactive knowledge accessibility. 

Challenge #1 - Change Management 

Gartner’s research reveals a sharp decline in employee support for change initiatives, from 74% in 2016 to just 38% in 2022. This downturn is driven by fears of job loss, a widespread misunderstanding of the change's purpose, and a general hesitation to adopt new technologies. 


Solution: Switching to an open-source (inclusive and participatory) approach to change management turns the old directive methods on their head. Instead of top-down orders, this approach brings everyone into the conversation, boosting buy-in and reducing burnout.

The results? Companies using this strategy see their change success rates increase as much as 14-fold. When employees are involved, initiatives stick better, and people are more likely to stay engaged and committed.

Challenge #2 - Low Speed of AI 

Using AI for knowledge accessibility hinges on two critical factors: real-time understanding and swift action. The AI must instantly grasp what the worker is doing and the context in which they're operating and respond immediately. Any lag between understanding and acting on the situation diminishes the AI’s effectiveness.  


Solution: To overcome these challenges, implement "miniaturized models": compact, streamlined versions of larger algorithms. These run directly on digital workers' laptops (CPUs) and are supported by edge computing.

This setup processes data locally, minimizing delays. When tasks exceed the local CPU's capacity, data is sent to the cloud, where more powerful GPUs handle the complex processing.  

This strategic computing distribution ensures that immediate, more straightforward tasks are managed on-site while the cloud addresses more complex demands, enhancing responsiveness and efficiency in accessing knowledge. 
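The local-first, cloud-fallback routing described above can be sketched as follows. This is a minimal illustration, not a production design: the `Task` structure, the complexity score, the 0.5 threshold, and the two handler functions are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: float  # assumed score: 0.0 (trivial) .. 1.0 (heavy)

def run_local(task: Task) -> str:
    # Placeholder for the miniaturized model running on the worker's CPU.
    return f"local:{task.name}"

def run_in_cloud(task: Task) -> str:
    # Placeholder for the larger, GPU-backed model in the cloud.
    return f"cloud:{task.name}"

def route(task: Task, threshold: float = 0.5) -> str:
    """Handle simple tasks on-device; escalate complex ones to the cloud."""
    return run_local(task) if task.complexity <= threshold else run_in_cloud(task)

print(route(Task("autocomplete", 0.2)))       # handled locally
print(route(Task("summarize-report", 0.9)))   # escalated to the cloud
```

In practice the complexity estimate itself would come from task metadata or a lightweight classifier, but the dispatch logic stays this simple: local when cheap, cloud when not.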

Challenge #3 - High Cost of AI 

The high cost of computing operations presents a significant challenge to AI adoption, with 29% of companies citing it as a major hurdle. For instance, OpenAI's substantial investment—over $100 million to develop GPT-4—highlights the potential expenses associated with sophisticated AI systems.   

This financial concern becomes even more pronounced with the need for frequent AI resource utilization, both internally and externally. 


Solution: To tackle the high costs associated with AI, businesses can adopt miniaturized models that run efficiently on digital workers' laptops or desktops for specific tasks, reducing the need for expensive hardware.

Implementing distributed computing strategies, where simpler processes are handled locally and more complex tasks are managed in the cloud, can optimize resource use and lower costs.  

Additionally, focusing on transmitting only essential, non-sensitive data to the cloud minimizes data handling costs. This selective approach, combined with training AI for specific functions, significantly cuts operational and training expenses, making AI adoption more financially viable for companies. 

Challenge #4 - Low-Quality or Insufficient Data 

Effective AI deployment hinges crucially on high-quality data. Up to 35% of AI projects face delays or fail due to inadequate or poor data quality. Whether expansive datasets for retrieval engines or precise sample data for niche applications, the quality of data determines the success of AI models.  

Training with flawed data can lead to unreliable or biased decisions, illustrating the saying 'garbage in, garbage out'—only robust, accurate datasets ensure reliable performance. 


Solution: To ensure AI effectiveness, begin with thorough data preparation.

Start by profiling your data to understand its characteristics, identify formatting inconsistencies, and gather basic statistical information. This assessment helps determine the dataset’s viability or necessary adjustments.  

Next, refine your data to align precisely with your AI model's parameters. Follow this with thorough validation and quality assessment, adhering to established rules or standards.  

Finally, implement continuous quality monitoring, systematically identifying and addressing data issues to guarantee the accuracy and effectiveness of your AI models. 
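The profiling and validation steps above can be sketched in a few lines. The sample records, field names, and rules here are invented for illustration; a real pipeline would profile far more characteristics (types, distributions, duplicates) and run continuously, not once.

```python
from collections import Counter

def profile(records, required_fields):
    """Basic profiling: missing-value rate for each required field."""
    missing = Counter()
    for row in records:
        for field in required_fields:
            if row.get(field) in (None, ""):
                missing[field] += 1
    n = len(records) or 1
    return {field: missing[field] / n for field in required_fields}

def validate(records, rules):
    """Return the rows that violate any rule (a mapping of field -> predicate)."""
    return [row for row in records
            if any(not check(row.get(field)) for field, check in rules.items())]

# Hypothetical knowledge-base entries: the second is flawed (no title, stale date).
docs = [
    {"title": "Refund policy", "updated": "2024-01-10"},
    {"title": "", "updated": "2019-05-02"},
]
rules = {"title": lambda v: bool(v), "updated": lambda v: v >= "2023-01-01"}

print(profile(docs, ["title", "updated"]))  # {'title': 0.5, 'updated': 0.0}
print(validate(docs, rules))                # flags the second row
```

Rows flagged by `validate` would be corrected or excluded before training, which is exactly the "garbage in, garbage out" filter the section describes.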

Challenge #5 - Data Privacy and Security Concerns 

Data security is a significant hurdle for AI adoption, especially in industries where privacy is crucial. Companies worry about protecting sensitive information, from business secrets to employee details. Concerns about cloud-based solutions—such as data breaches and their legal and ethical consequences—make many hesitant to adopt AI despite its potential benefits. 


Solution: Incorporating privacy-by-design principles early in development ensures data protections are built into AI systems.  

Adopting a dual-model approach, where a smaller model processes sensitive data like personal details and health records in-house before interacting with external systems, can safeguard critical information. This ensures that only non-sensitive data is exchanged with cloud-based solutions, maintaining security and compliance while enabling the benefits of AI. 
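A minimal sketch of that local filtering step is shown below. The toy regexes stand in for the in-house model that detects sensitive data; a real deployment would use a vetted PII-detection component, and `query_cloud` is a placeholder for the external system.

```python
import re

# Illustrative patterns only; real PII detection needs far more than two regexes.
SENSITIVE = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_locally(text: str) -> str:
    """Local step: strip sensitive data before anything leaves the device."""
    for label, pattern in SENSITIVE.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

def query_cloud(prompt: str) -> str:
    # Placeholder for the cloud-hosted model; it only ever sees redacted text.
    return f"cloud-response({prompt})"

safe = redact_locally("Contact jane.doe@example.com, SSN 123-45-6789")
print(safe)  # Contact [EMAIL], SSN [SSN]
print(query_cloud(safe))
```

The key property is ordering: redaction runs locally and unconditionally before `query_cloud`, so sensitive values never appear in outbound traffic or cloud logs.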

Additionally, conducting regular privacy assessments and implementing robust protection strategies are vital for maintaining data privacy and security in organizations. 

Proactive, Not Reactive

By leveraging AI for knowledge access, companies cut data search times from thirty minutes to mere seconds, enhancing productivity by as much as 20%. 

AI outperforms older methods, not just in speed but in intelligence. It evolves from a reactive stance to a proactive one, acting like a GPS for the workday, predicting the information needs of digital workers before they even ask. 

Of course, a tool this powerful isn't without its challenges. However, the strategies discussed here hold significant promise for navigating potential obstacles in AI-driven proactive knowledge access, from change management and cost to speed, data cleanliness, and security. 

Stay tuned. My next article will outline a clear plan to implement this transformative technology.
