Arthur C. Clarke, one of the founding fathers of modern science fiction, once said: “Any sufficiently advanced technology is indistinguishable from magic.” Though AI seems less mystical today than it did in the sixties, we don’t always understand how it arrives at specific conclusions – including those related to our networks.
From “Terminator” to “The Matrix”, from “Battlestar Galactica” to “2001: A Space Odyssey” – the idea that artificial intelligence may supplant humankind as the dominant intelligent species on the planet has been a common theme in popular culture for decades. In many cases, the hostility between metal and flesh stemmed from nothing more than good old… poor communication.
Remember these? HAL 9000, an intelligent computer whose role is to maintain a spaceship while most of the crew is in suspended animation, malfunctions due to a conflict in his orders. VIKI, an AI supercomputer from “I, Robot”, logically infers a Zeroth Law of Robotics from Asimov’s Three Laws of Robotics, which allows it to justify killing many individuals to protect the whole, effectively running counter to the prime reason for its creation.
Explain yourself, HAL!
Although these bleak scenarios seem somewhat far off, it is absolutely true that we don’t always know how the machines actually think. Powerful AI/ML models, in particular Deep Neural Networks, tend to be very hard to explain. Sometimes we face a dilemma: accept a model that works much better (an LSTM Neural Network, for example) than a simpler alternative (say, a Logistic Regression), even though it is far harder to understand and explain.
Putting it simply (pun intended!), today’s artificial intelligence often relies on a so-called black box. The AI is fed an input and provides an output, but what happens inside the black box is impossible for us to see. In machine learning, these black box models are created directly from data by an algorithm, meaning that humans, even the people who design them, often cannot understand how variables are combined to make predictions.
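To make that contrast concrete, here is a minimal sketch using scikit-learn on synthetic data (a generic illustration, not anything from ExtremeCloud IQ): a logistic regression exposes coefficients you can actually read, while a small neural network only gives you outputs.

```python
# A minimal "glass box vs. black box" sketch on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Interpretable: each learned coefficient tells you how a feature pushes the prediction.
glass_box = LogisticRegression().fit(X, y)
print("Logistic regression coefficients:", glass_box.coef_)

# Black box: the neural network predicts too, but its weights are spread across
# hidden layers and don't map to a rule a human can read.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print("Neural network prediction for one sample:", black_box.predict(X[:1]))  # output only, no rationale
```

The neural network may well score better on this data, but when it flags something, the weights alone won’t tell you why.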
We don’t always know how the machines actually think.
Sure, the technology itself provides great benefits for organizations, boosting their efficiency and enabling new ways of working. But explainability is crucial in systems that are responsible for decisions and automated actions. The lack of proper explanation of black box ML models is already causing problems in healthcare, criminal justice, recruitment, and other fields, including IT networks.
We need to understand that an ML model is always a simplified representation of reality and does not model it correctly all the time. HAL, for example, couldn’t resolve conflicting instructions. A good model interprets reality correctly “most of the time,” so you can imagine what a bad model might produce. As part of ML model evaluation, we want to know how often the model correctly identifies a specific occurrence. That percentage determines whether the model is good or bad. Even excellent models may not accurately identify everything, as the short example below illustrates. Which leads to these questions:
What if your problem is part of a set of incorrect identifications?
How will you know, and if you don’t know, how will you be able to override the AI’s decision?
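To ground that percentage, here is a tiny evaluation sketch with made-up labels (hypothetical data and generic scikit-learn metrics, not real network telemetry):

```python
# Hypothetical labels: 1 = "network issue present", 0 = "no issue".
from sklearn.metrics import accuracy_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # what actually happened
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # what the model said

print("Accuracy:", accuracy_score(y_true, y_pred))             # 0.8 -> wrong twice in ten cases
print("Recall on real issues:", recall_score(y_true, y_pred))  # 0.8 -> one real issue was missed
```

An 80% model may look perfectly acceptable on paper, until your outage is the one it missed.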
Explainable AI for your cloud-managed networks
Explainable AI (XAI) proposes that any ML model be explainable and offer an interpretation of its decision-making. XAI also promotes the use of simpler models that are inherently interpretable. Here at Extreme we realize that keeping the human in the loop is important when building ML/AI applications, so we decided to build a CoPilot around the concept of XAI for our cloud-based networking management platform, ExtremeCloud IQ.
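As a rough illustration of what “offering an interpretation” can look like for an inherently interpretable model, the sketch below breaks a single prediction into per-feature contributions. The feature names are hypothetical and the code is generic scikit-learn, not the CoPilot implementation.

```python
# Explaining one decision of an inherently interpretable model:
# contribution of each feature = coefficient * feature value for this sample.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["client_count", "channel_utilization", "retry_rate", "rssi"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

sample = X[0]
contributions = model.coef_[0] * sample
for name, value in sorted(zip(feature_names, contributions), key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")  # features listed from most to least influential
print("Predicted class:", model.predict(sample.reshape(1, -1))[0])
```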
In our space, this commitment to XAI means that any ML/AI that helps with network troubleshooting or makes recommendations regarding your enterprise network should provide a readable account of how insights were derived and offer clear evidence supporting the decision. As the domain expert, a network administrator can override those decisions in any mission-critical environment. When ML/AI decisions are auditable, it removes ambiguity around accountability and helps build trust between the network administrator and the ML/AI.
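One way to picture such an auditable decision (purely illustrative, not the ExtremeCloud IQ CoPilot API) is a record that carries the recommendation together with its evidence and an explicit override field:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NetworkInsight:
    recommendation: str      # what the ML/AI suggests doing
    evidence: List[str]      # the readable facts the insight was derived from
    confidence: float        # how sure the model is, surfaced to the human
    overridden_by: str = ""  # filled in when a network admin rejects the action

insight = NetworkInsight(
    recommendation="Move AP-17 from channel 6 to channel 11",  # hypothetical example
    evidence=[
        "Channel 6 utilization above 80% for the last 3 hours",
        "Co-channel interference reported by 4 neighboring APs",
    ],
    confidence=0.87,
)

# The domain expert stays in the loop: the decision is audited, not silently applied.
insight.overridden_by = "network-admin@example.com"
```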
By providing explainable ML/AI, ExtremeCloud IQ CoPilot enables you to automate operations, enhance security, and enrich user experiences with greater confidence than ever before.
And, as a bonus, it allows you to put magic tricks and bloodthirsty robots where they belong – in the fantasy section!