Some artificial intelligence (AI) researchers are trying to fix a major flaw of neural networks. Deep learning models play increasingly important roles across a wide range of decision-making scenarios. Unfortunately, they are unable to provide human-understandable explanations for their opaque, complex decision-making processes. This interesting topic came to us from Synced in their article, “Logic Explained Deep Neural Networks: A General Approach to Explainable AI.”
Deep learning is extremely beneficial to data scientists tasked with collecting, analyzing and interpreting large amounts of data. Using machine learning and deep learning techniques, you can build computer systems and applications that perform tasks commonly associated with human intelligence.
A group of researchers has proposed a general approach to explainable AI in neural architectures, designing interpretable deep learning models. You can read more about their proposal here.
Data Harmony is Access Innovations’ AI suite of tools that leverages explainable AI for efficient, innovative and precise semantic discovery of new and emerging concepts, helping you find the information you need when you need it. Through our classification and indexing engine, Data Harmony provides concept identification and extraction to recommend terms and complete your semantic model.
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.