Abstract
Autonomous network functions (ANFs) are activated to achieve a specific objective, such as load balancing, coverage and capacity optimization, or energy saving across the network. In many cases, activating an ANF does not meet its objective, predominantly due to external factors [1]. This paper describes how explainable AI (xAI) methods such as feature impact analysis, dependence plots, and other interpretable machine learning techniques can be used to identify these external factors and, in turn, to sequence the ANFs so that the objective is met. The paper concludes by introducing counterfactual and recourse algorithms as further research directions that go beyond xAI toward obtaining favorable outcomes from ANFs.