#TFW you run across a quote from St. Augustine and things make a bit more sense.
Si comprehendis, non est Deus. (If you understand, it is not God you understand.)
I came across this quote again reading Salzman & Lawler’s Pope Francis and the Transformation of Health Care Ethics. It is, as always, a humbling (and healthy!) reminder to anyone who has pursued graduate studies in theology. No matter how much we think we know, we cannot completely understand the divine.
I was reflecting on this and the concept of explainable artificial intelligence (XAI) came to mind. A common talking point about AI systems is their lack of transparency, particularly in predictive algorithmic models. Not only are algorithms of this nature complicated and difficult to understand, but they are often proprietary and intentionally kept under wraps. The lack of understanding and secrecy regarding how machine learning (ML) models are built, trained, and operated is often described as a “black box.” When we see one of these AI systems in action, we don’t understand how it works and we are amazed.
Algorithm: a word used by programmers when they don’t feel like explaining themselves
This tongue-in-cheek definition of algorithm reminds us that every step of the development process is explainable: from the value priorities embedded in an algorithm’s creation, to the collection of training data, to the utilization of the model, each step can be documented and understood. The work of XAI is to open the black box and bring transparency to the development, training, and use of predictive ML models.
Augustine cautions us to remember that if we understand, the thing we understand is not God. When we think of emerging AI applications like ChatGPT, we should be attentive to two things:
If we do not understand something, it does not mean that it is God. As we see headlines that portend a dystopian future in which humanity is subject to robot overlords, please remember that computers, algorithms, AI, etc. are not divine. Computers can only do what we tell them to do. Yes, if developers are sloppy at best or malicious at worst, these technologies can cause harm. The reverse is also true: when beneficence and equity are built into AI development, these technologies can facilitate great benefit.
AI is explainable & understandable. Despite industry secrecy and media hype, we must remember that these technologies are explainable. Even if we (you or I) don’t currently understand the particulars, someone does. I hope this will serve as a reminder that we should move beyond the wonder and awe of what AI systems appear to be doing and maintain a healthy perspective.
Per Augustine, if one understands AI, then AI is not God. Also, don’t be fooled. If you don’t understand AI, AI does not have some magical divine power.