1. How do you see AI and predictive analysis evolving in the next 2-3 years?
AI is likely to become more accurate, reliable, and cost-effective, especially as agentic design and guardrails for security, ethics, and responsible AI use continue to improve. However, given the generative nature of these systems, complete accuracy and the elimination of bias may remain out of reach: we can expect AI to become increasingly precise, but it is unlikely to ever be entirely free from errors or biases.
One notable shift could be the automation of analytics engineering. Advances in generative AI and large language models (LLMs) have the potential to significantly reduce the manual effort involved in data preparation and transformation, particularly if data dictionaries are well-maintained. Historically, a large portion of data science work has been spent on data cleaning rather than modeling. If AI can take on more of this burden, predictive analytics workflows may become more efficient. That said, the modeling process itself may not be fully automated anytime soon. Over the past decade, data scientists have increasingly relied on existing tools and prebuilt models rather than developing them from scratch. This suggests that while AI can assist with certain tasks, human expertise—particularly in evaluating model validity—will likely remain essential.
From a business standpoint, predictive analytics may become more accessible to smaller companies, but it is uncertain whether this will lead to true democratization. Many startups and small businesses still lack the resources to maintain in-house data science teams, and while hiring data scientists on a project basis is becoming more common, predictive analytics is not yet at a stage where it can be fully automated or easily adopted by all organizations.
Another emerging trend is the increasing focus on interpretability and transparency. As businesses integrate AI-driven decision-making into their strategies, the need for explainability tools is likely to grow. These tools could help with compliance, stakeholder trust, and debugging, but their adoption will depend on factors such as regulatory pressures and industry-specific needs.
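Permutation importance is one simple, model-agnostic example of such an explainability technique: shuffle one feature at a time and see how much a performance metric degrades. The sketch below is illustrative only, using a toy linear model and synthetic data rather than any specific explainability library:

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric_fn, n_repeats=5, seed=0):
    """Importance of feature j = average drop in the metric when column j
    is shuffled, breaking its link to the target. Toy sketch, not a
    production implementation."""
    rng = np.random.default_rng(seed)
    baseline = metric_fn(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle one feature in place
            drops.append(baseline - metric_fn(y, model_fn(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "model" that depends on feature 0 only (hypothetical data)
X = np.random.default_rng(1).normal(size=(500, 3))
y = 2.0 * X[:, 0]
model_fn = lambda X: 2.0 * X[:, 0]
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - np.mean(y)) ** 2)

imp = permutation_importance(model_fn, X, y, r2)
print(imp)  # feature 0 should dominate; features 1 and 2 are irrelevant
```

In practice, established tooling (e.g. SHAP-style attributions or scikit-learn's built-in permutation importance) would be used instead, but the underlying idea is the same.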
2. How do you measure the success of a predictive analysis project completed by AI?
Common Model Performance Metrics:
Accuracy – The proportion of correct predictions.
Precision (Positive Predictive Value) – The fraction of predicted positives that are truly positive (TP / (TP + FP)).
Recall (Sensitivity/True Positive Rate) – The fraction of actual positives the model correctly identifies (TP / (TP + FN)).
F1 Score – The harmonic mean of precision and recall.
AUC-ROC (Area Under the ROC Curve) – Measures a model’s ability to distinguish between classes across all decision thresholds.
RMSE (Root Mean Squared Error) – Evaluates the average magnitude of errors in regression models.
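As a concrete sketch, the classification metrics above can all be computed directly from a confusion matrix; the arrays here are synthetic and purely illustrative:

```python
import numpy as np

# Toy binary predictions (hypothetical data)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"Accuracy={accuracy:.2f} Precision={precision:.2f} "
      f"Recall={recall:.2f} F1={f1:.2f}")

# RMSE for a regression model (separate toy data)
y_reg_true = np.array([3.0, 5.0, 2.5, 7.0])
y_reg_pred = np.array([2.8, 5.4, 2.9, 6.5])
rmse = np.sqrt(np.mean((y_reg_true - y_reg_pred) ** 2))
print(f"RMSE={rmse:.2f}")
```

Libraries such as scikit-learn provide these metrics out of the box (including AUC-ROC, which needs predicted probabilities rather than hard labels), but spelling them out makes the definitions explicit.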
Beyond standard metrics, stakeholder feedback is often overlooked but critical. Business teams are closest to real-world operations, and if they find a model unreliable or misaligned with actual conditions, its success is questionable—even if its performance metrics look good. Often, gaps between model output and business needs arise due to missing or misrepresented data. Stakeholder trust and adoption are crucial; if a model isn’t used, it’s simply consuming resources without delivering value.
Balancing Accuracy with Business Impact:
Business impact always comes first. AI models are not built for the sake of accuracy alone but to drive meaningful business outcomes. A quickly built model with slightly lower accuracy but immediate business impact is often more valuable than a highly accurate model that takes too long to develop.
3. What are the 3 biggest technical challenges decision-makers will face when implementing AI-powered predictive analysis, and how can they overcome them?
1. Data Readiness
Challenge: AI models require high-quality, relevant data. Inconsistent formats, missing values, and disparate sources can hinder performance.
Solution: Establish robust data governance frameworks, enforce data cleaning and validation processes, and use data integration tools to ensure consistency and reliability.
2. Monitoring and Maintenance
Challenge: AI models can degrade over time due to shifting data patterns (model drift). Without proper monitoring, they may produce inaccurate results.
Solution: Implement continuous monitoring, set up retraining pipelines, and use automated ML tools to adjust models based on evolving data.
3. Integration with Existing Processes
Challenge: AI insights must seamlessly integrate with business workflows, but stakeholder resistance and technical incompatibilities can create roadblocks.
Solution: Build AI literacy across the company, involve data teams in process and software development to ensure data readiness, and design AI solutions with interoperability in mind.
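For the monitoring challenge above, one widely used drift signal is the Population Stability Index (PSI), which compares a model's score distribution in production against the distribution it was trained on. The sketch below uses synthetic data, and the 0.2 alert threshold is a common rule of thumb rather than a universal standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training (expected) and production (actual)
    distribution; values above ~0.2 are often treated as drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # floor the proportions to avoid log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(42)
train_scores = rng.normal(0.0, 1.0, 5000)   # scores at training time
stable = rng.normal(0.0, 1.0, 5000)         # production, same distribution
shifted = rng.normal(0.8, 1.0, 5000)        # production after drift

psi_stable = population_stability_index(train_scores, stable)
psi_shifted = population_stability_index(train_scores, shifted)
print(f"PSI (stable): {psi_stable:.3f}, PSI (shifted): {psi_shifted:.3f}")
```

A check like this can run on a schedule and trigger the retraining pipeline when the index crosses the chosen threshold.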
Industry-Specific Concerns:
Security and safety remain major challenges, especially in highly regulated industries such as finance, healthcare, and education. Ensuring AI compliance with regulatory frameworks while maintaining model accuracy and fairness will be an ongoing challenge for decision-makers.
Stella Wenxing Liu is a Lead Data Scientist at Enterprise Technology, Arizona State University. With 14 years of experience working with AI, data, and people, she has worked across machine learning, AI, logistics, and e-commerce. Now in higher education, she focuses on building ethical and safe AI solutions. She also shares insights through her newsletter, The Cocoons, and co-hosts a Mandarin podcast on mid-career experiences in tech, 数据女孩的中年危机.