AI systems can inadvertently absorb bias from the data they are trained on, which can lead to unfair or discriminatory outcomes: a model trained on historically skewed decisions, for example, may reproduce those patterns in its predictions. Addressing this requires methods to detect and mitigate bias in both the data and the resulting models.
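As a minimal illustration of what bias detection can look like, the sketch below computes a demographic parity gap, the difference in positive-prediction rates between two groups; the function name, predictions, and group labels are hypothetical stand-ins for a real audit.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1), shape (n,)
    group:  group membership (0/1), shape (n,)
    A gap near 0 suggests similar treatment; a large gap flags potential bias.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return rate_b - rate_a

# Hypothetical predictions for eight applicants split across two groups
preds  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):+.2f}")
```

Checks like this only surface one notion of unfairness; mitigation (reweighting data, adjusting thresholds, or retraining) is a separate step that depends on the application.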
Deep learning models, while powerful, are often described as "black boxes" because their internal decision process is difficult to interpret. Efforts to make these models explainable have produced techniques such as feature-attribution methods (e.g., LIME and SHAP) and gradient-based saliency maps, but none of them delivers complete transparency.
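One widely used model-agnostic approach is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below is a simplified version of that idea, using a hypothetical black-box predictor and synthetic data.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Drop in accuracy when each feature is shuffled.

    model_fn: any callable mapping an (n, d) array to predicted labels (n,);
    a larger accuracy drop means the model relies more on that feature.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)                  # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to y
            drops.append(baseline - np.mean(model_fn(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical black box: predicts 1 whenever the first feature is positive
black_box = lambda X: (X[:, 0] > 0).astype(int)
X = np.random.default_rng(1).normal(size=(200, 3))
y = black_box(X)
print(permutation_importance(black_box, X, y))            # feature 0 should dominate
```

Techniques like this explain which inputs a model depends on, not why it combines them the way it does, which is part of why full transparency remains out of reach.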
As AI systems grow in size and complexity, many explainability methods stop scaling: exact attribution techniques such as Shapley values require a number of model evaluations that grows exponentially with the number of features. Researchers are therefore exploring approximations, for example sampling-based estimators, that make explainability techniques efficient enough for large, complex systems.
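A common scalability trick is Monte Carlo estimation: rather than enumerating all 2**d feature subsets, average marginal contributions over a few hundred random feature orderings. The sketch below (the function name and toy scoring model are hypothetical) illustrates this for per-feature Shapley values on a single prediction.

```python
import numpy as np

def shapley_sample(model_fn, x, background, n_samples=200, seed=0):
    """Monte Carlo estimate of per-feature Shapley values for one prediction.

    Exact Shapley values need 2**d model evaluations; averaging marginal
    contributions over random orderings costs on the order of n_samples * d calls.
    model_fn:   callable mapping an (n, d) array to scalar scores (n,)
    x:          instance to explain, shape (d,)
    background: reference input, e.g. feature means, shape (d,)
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        current = background.copy()
        prev_score = model_fn(current[None, :])[0]
        for j in order:                          # reveal features one at a time
            current[j] = x[j]
            score = model_fn(current[None, :])[0]
            phi[j] += score - prev_score         # marginal contribution of feature j
            prev_score = score
    return phi / n_samples

# Hypothetical scoring model: a weighted sum of three features
score_fn = lambda X: X @ np.array([2.0, -1.0, 0.5])
x_instance = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
print(shapley_sample(score_fn, x_instance, baseline))  # for this linear model, exactly [2., -1., 0.5]
```

The trade-off is typical of scalable explainability: a controlled loss of precision in exchange for a cost that grows with the sample budget rather than with the size of the model or feature space.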