[visionlist] A paradigm shift for Explainable AI (XAI) and Computer Vision - Explanation First, Model Creation Next
Asim Roy
ASIM.ROY at asu.edu
Sat Oct 25 03:26:13 -05 2025
The following paper was published in IEEE Access this week: Explainable AI (XAI) for Object Detection and Application to Satellite Imagery | IEEE Journals & Magazine | IEEE Xplore<https://ieeexplore.ieee.org/document/11214385>
The current issue of Phalanx, a magazine of the Military Operations Research Society (Military Operations Research Society > Home<https://www.mors.org/>), also has a featured article on this object recognition technology: Phalanx_Summer2025_WEB.pdf<https://www.mors.org/Portals/87/Documents/Publications/Phalanx/2025/Phalanx_Summer2025_WEB.pdf>
The method is very simple. For object detection and image recognition, we build what DARPA had always wanted from Explainable AI (Explainable Artificial Intelligence | DARPA<https://www.darpa.mil/research/programs/explainable-artificial-intelligence>): verification of the parts of objects as the explanation. We simply flipped the process of building models. We do it much the way houses are designed and built around the needs of the clients - you meet with the clients first to determine exactly what they want in the new house. This process of building explainable models has surprising benefits beyond the explanation itself:
1. Higher accuracy than a standard, non-XAI model, and
2. Protection against adversarial attacks without any adversarial training.
Other features and things to note:
1. It's a Neuro-Symbolic System - It is worth noting that a layer of symbolic processing on top of a complex pattern-recognition machine works like a charm: you get the benefits of higher accuracy and protection against adversarial attacks, with no adversarial training required.
2. The EU AI Act - This approach to explainability will make it easier to satisfy the provisions of the EU AI Act (The Act Texts | EU Artificial Intelligence Act<https://artificialintelligenceact.eu/the-act/>), which require model explainability.
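To make the "explanation first" idea concrete, here is a minimal sketch of a symbolic verification layer sitting on top of a part detector. The class names, part lists, and acceptance threshold are illustrative assumptions, not taken from the paper; the point is only that the list of verified and missing parts itself serves as the explanation.

```python
# Hypothetical part model: which parts define each object class.
# (Illustrative labels, not from the published paper.)
PART_MODEL = {
    "airplane": {"fuselage", "wing", "tail"},
    "ship": {"hull", "deck", "bow"},
}

def verify_object(candidate_label, detected_parts, min_parts=2):
    """Symbolic verification layer: accept a candidate detection only if
    at least min_parts of its defining parts were independently detected,
    and return the verdict together with the explanation."""
    required = PART_MODEL[candidate_label]
    found = required & set(detected_parts)
    accepted = len(found) >= min_parts
    explanation = {
        "verified_parts": sorted(found),
        "missing_parts": sorted(required - found),
    }
    return accepted, explanation

# A perturbation that fools a whole-object classifier is unlikely to also
# fabricate consistent part detections - the intuition behind the
# adversarial-robustness claim above.
ok, why = verify_object("airplane", ["fuselage", "wing", "shadow"])
print(ok, why["missing_parts"])
```

In a real system the `detected_parts` would come from a trained part detector rather than a hand-written list; the symbolic layer stays the same either way.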
Much of the inspiration and motivation for this approach came from the late Prof. Horace Barlow of Cambridge University (Horace Barlow - Wikipedia<https://en.wikipedia.org/wiki/Horace_Barlow>), the great-grandson of Charles Darwin, who in his later years became a "single unit freak" and a proponent of the grandmother cell theory. He was one of the pioneering vision scientists and received many international awards. In his words on grandmother cells: "though I still hold that something conforming amazingly well to what was conceptualised by Jerry Lettvin 50 years ago, really does exist!... and of course the more widely discussed it is the better pleased I shall be, though I fear that what I have written will not be universally accepted, at least at first!" That single-cell theory actually takes us to abstract concepts and symbolic reasoning beyond pattern recognition. And we have plenty of neurophysiological evidence for single-cell abstractions in the brain.
We are interested in exploring the application of these ideas to computer vision systems that are considered high-risk under the EU AI Act. Please feel free to contact me if there is interest in building actual applications.
Asim Roy
Professor, Information Systems
Arizona State University
Asim Roy | ASU Search<https://search.asu.edu/profile/9973>
[Image: Inside Darpa's Push to Make Artificial Intelligence Explain Itself - WSJ]