By Meaghan Frost

Is Your Data Ready for AI? Preparing Data for Copilot

Everyone knows they need to better understand and adopt AI. Where do you begin? With your data, of course. But not all data is AI-ready. Let’s walk through the steps you need to take to make your data ready for artificial intelligence.


Critical Steps to Prepare Data for Copilot (Extensions & Custom Agents) 


1. Data Collection and Aggregation

  • Conduct a comprehensive data inventory to understand what data you have, where it is located, and its current state.

  • Gather relevant data from internal systems, external databases, and third-party sources. The goal is to create a comprehensive dataset that reflects the diverse and unique aspects of the business operations.

  • Aggregating data ensures that the AI model has access to a wide range of information (see the sketch after this list).
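
As a rough illustration of what collection and aggregation can look like in practice, here is a minimal Python (pandas) sketch. The file names, column names, and join key are hypothetical stand-ins for your own sources, not part of any Copilot tooling.

```python
import pandas as pd

# Hypothetical sources: a CRM export, an ERP extract, and a third-party feed.
crm = pd.read_csv("crm_accounts.csv")
erp = pd.read_csv("erp_orders.csv")
vendor = pd.read_csv("vendor_enrichment.csv")

# Join on a shared business key so the combined dataset keeps the context
# from every system in one place.
combined = (
    crm.merge(erp, on="account_id", how="left")
       .merge(vendor, on="account_id", how="left")
)

# A quick inventory of what you now have: row/column counts and missing values.
print(combined.shape)
print(combined.isna().sum())
```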

2. Data Cleaning and Normalization 

  • Remove duplicates, correct errors, and standardize the formats of your data (see the sketch after this list).

  • Data normalization ensures that all data points are consistent and comparable.

  • Inaccurate or inconsistent data can lead to inaccurate predictions and insights, undermining trust in the AI system.
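
To make this concrete, here is a minimal pandas sketch of cleaning and normalization, continuing from the aggregation step above. The input file and column names are illustrative assumptions, and min-max scaling is just one common normalization choice.

```python
import pandas as pd

df = pd.read_csv("combined_dataset.csv")  # hypothetical output of the previous step

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Standardize formats: trim whitespace, normalize casing, parse dates consistently.
df["customer_name"] = df["customer_name"].str.strip().str.title()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Normalize a numeric column to a 0-1 range so values are comparable across features.
min_val, max_val = df["order_amount"].min(), df["order_amount"].max()
df["order_amount_scaled"] = (df["order_amount"] - min_val) / (max_val - min_val)
```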

3. Curation 

  • Transform clean and normalized data into a form the AI model can use by selecting the most relevant variables and reducing dimensionality where necessary.

  • Establish clear and logical relationships between different data sets. This helps Copilot understand the context and connections within your data.

  • Use standardized calculation logic for measures and adopt clear naming conventions to enhance the efficiency of report generation (see the sketch after this list).
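
Here is a small, hypothetical pandas sketch of curation: one naming convention across tables, an explicit relationship via a shared key, and a measure defined once so every report uses the same calculation logic. The table names, columns, and the margin formula are illustrative assumptions, not a prescribed model.

```python
import pandas as pd

orders = pd.read_csv("orders.csv")       # hypothetical fact table
products = pd.read_csv("products.csv")   # hypothetical dimension table

# Adopt one clear naming convention (snake_case here) across every table.
orders = orders.rename(columns={"OrdAmt": "order_amount", "ProdID": "product_id"})
products = products.rename(columns={"ProdID": "product_id", "ProdName": "product_name"})

# Make the relationship between tables explicit via a shared key.
curated = orders.merge(products, on="product_id", how="left", validate="many_to_one")

# Standardized calculation logic: define a measure once and reuse it everywhere.
def gross_margin(df: pd.DataFrame) -> pd.Series:
    return (df["order_amount"] - df["unit_cost"] * df["quantity"]) / df["order_amount"]

curated["gross_margin"] = gross_margin(curated)
```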

4. Feature Engineering and Selection 

  • The level of complexity depends on the development path: an extension of Copilot for Microsoft 365 or a fully custom agent.

  • Imposing a cutoff on the number of attributes that can be considered when building a model can be helpful. Feature selection helps solve two problems: having too much data that is of little value or having too little data that is of high value. Your goal in feature selection should be to identify the minimum number of columns from the data source that are significant in building a model. Check out further insight in Microsoft Learn.

  • With extensions, features are handled by Microsoft.

  • If you are building custom machine learning models or performing specific data analysis tasks, you will need to handle feature selection yourself. This involves applying statistical methods via a modeling tool or algorithm to discard attributes based on their usefulness to the intended analysis (a generic sketch follows this list).

  • See the Microsoft Learn link above for the different algorithms that Microsoft supports for feature selection.
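
For the custom-agent path, here is a generic feature-selection sketch using scikit-learn's SelectKBest. This is not Microsoft's tooling (the Learn link above covers the algorithms Microsoft supports); the dataset, target column, and the cutoff of 10 attributes are assumptions for illustration.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

df = pd.read_csv("curated_dataset.csv")  # hypothetical output of the curation step
X = df.drop(columns=["defect_flag"])     # assumes numeric features with no missing values
y = df["defect_flag"]                    # "defect_flag" is an illustrative target column

# Impose a cutoff: keep only the 10 attributes most associated with the target
# (k must be no larger than the number of available columns).
selector = SelectKBest(score_func=f_classif, k=10)
selector.fit(X, y)

selected = X.columns[selector.get_support()]
print("Columns significant enough to keep:", list(selected))
```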


Potential Risks  

  • Inaccurate or Biased Models: These can have serious consequences, especially in critical areas like healthcare and finance, where decisions based on faulty AI predictions can lead to harmful outcomes.

  • Overly Simplistic Models: Insufficient or incomplete data can produce models that fail to capture the complexity of real-world scenarios, resulting in AI systems that are unable to make accurate predictions or provide meaningful insights.

  • Data Security: Poorly integrated AI systems can be vulnerable to data security issues such as data leaks, data poisoning, and prompt injection attacks. These risks can compromise the integrity and confidentiality of both internal and client data.

  • Biased Predictions: Incomplete datasets can lead to biased AI predictions, while erroneous data, often due to human or measurement errors, can mislead AI into making incorrect decisions. 

  • Poor Performance: AI models trained on deficient data inputs will produce inaccurate outputs, leading to poor performance and unreliable results. This can undermine the trust and effectiveness of AI systems. 


Successful Example of Using Copilot After Proper Data Preparation 

Case Study: Interloop Client Success 


One notable example of a business successfully using Copilot after data preparation is an Interloop client in the construction materials industry. By following the critical steps of data collection, cleaning, and feature engineering, the company achieved impressive results: 

  • Operational Efficiency: The AI-driven solution streamlined various operational processes, resulting in a faster and more convenient way to input data.

  • Improved Production Insights: The clean and well-structured data enabled the AI to generate detailed production insights, helping the business to adjust engineering strategies for certain product specifications proactively. 

  • Increased Access: The AI solution enhanced accessibility to data through integrations with productivity apps like Microsoft Teams on desktop and mobile. Users no longer had to navigate through layers of SharePoint to access information.

 

The client ensured a smooth AI implementation through several key practices: 

  • Defining a Minimum Valuable Experience (MVE) – AI solutions are easily subject to scope creep. This client worked with Interloop to set a clear definition of what the first iteration of Copilot should look like.

  • Depth over Breadth – the client was steadfast in maintaining the depth of the project. In other words, they chose one to three specific use cases they wanted Copilot to master instead of trying to envision every potential use case or question their organization could ask.

  • Launch to a Pilot Group – when launching the MVE, the client released the Copilot to a small group of employees. This way they could control security, mitigate the risk of failure, incorporate user feedback, and test resonance with the target audience. The pilot group also allowed the client to build momentum and excitement within the organization, in hopes of driving internal adoption.


 

Get Looped In


Looking to achieve more with your data? Get looped in with one of our data experts today to explore how we can support getting your data ready for AI and for scale.
