In recent years, artificial intelligence (AI) has been the subject of lively debate in the field of criminal law. This paper examines the admissibility of AI-generated evidence (AI evidence) in criminal trials, with a particular focus on the standards governing expert evidence. As AI is increasingly used in legal systems, concerns about the transparency and reliability of AI evidence have become central. A principal challenge is the 'black box' nature of machine learning: the decision-making process of an AI system is often opaque, making it difficult for judges and juries to assess its reliability. The paper argues that AI evidence should be subject to strict expert admissibility standards, because such evidence requires careful examination of the underlying algorithms and data. The black box problem, however, complicates the application of those standards to AI evidence. The paper therefore explores 'Explainable AI' (XAI) and considers how it might be integrated into the admissibility standards to address this issue.