ABSTRACT: In the competitive landscape of digital banking, predicting and mitigating customer churn is essential for sustained growth. Traditional predictive models can forecast churn with high accuracy, but their opacity is problematic in regulated domains such as finance, where transparency and accountability are paramount. This study investigates the integration of Explainable Artificial Intelligence (XAI) with churn prediction models, focusing on SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). We apply these methods to machine learning models trained on digital banking customer data, evaluating both predictive performance and how readily their explanations can be understood by end users and compliance teams. The study proposes a framework for assessing interpretability along four dimensions: fidelity, stability, stakeholder usability, and fairness. Our findings offer empirical insights into the trade-off between model accuracy and transparency, providing practical guidance for the responsible deployment of AI in customer experience management. By aligning technical solutions with regulatory requirements and the need for human-centered explanation, this work aims to advance ethical AI in finance.
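To make the abstract's setup concrete, the following is a minimal sketch of the kind of pipeline evaluated in the study: a churn classifier explained globally with SHAP and locally with LIME. The synthetic data, feature names, and gradient-boosting model are illustrative assumptions, not the study's actual dataset or model choice.

```python
# Minimal sketch: explaining a churn classifier with SHAP (global) and LIME (local).
# The synthetic data and feature names are illustrative placeholders, not the
# study's actual digital-banking dataset.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

feature_names = ["tenure_months", "monthly_logins", "balance", "support_tickets"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions over the test set (global importance view).
shap_values = shap.TreeExplainer(model).shap_values(X_test)
print("Mean |SHAP| per feature:",
      dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))

# LIME: a local surrogate explanation for a single customer's churn prediction.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["stay", "churn"],
                                      mode="classification")
explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                              num_features=4)
print(explanation.as_list())
```

The SHAP output supports the global, compliance-facing view of what drives churn predictions overall, while the LIME output supports the per-customer view; comparing the two is the kind of fidelity and stability assessment the proposed framework formalizes.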