Black box AI puts business reliability, trust, and transparency at odds
By Stu Sjouwerman
Here’s what leaders need to know about the role of awareness and human oversight in building trust in opaque AI systems.
AI systems are taking deeper root within organizations as they process huge amounts of data with machine learning algorithms to make decisions and derive business insights. But the lack of transparency into how these systems turn data into outputs is leading companies and regulatory bodies to question their accuracy and reliability in business applications.
UNDERSTANDING BLACK BOX AI
Sometimes, AI systems, particularly those using deep learning models, make predictions or decisions without offering any explanation of how they arrived at them. This is what is known as black box AI.
A black box AI system can be created intentionally by developers who want to keep their source code and processes private to protect intellectual property. More often, though, the opacity is a by-product of training: the most widely used AI technologies, including OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama, are black boxes for this reason. The deep learning networks behind these models are trained on huge datasets and built from intricate neural layers, numbering from the hundreds into the thousands. A typical neural network has three main types of layers: the input layer, hidden layers, and the output layer. Even the developers of an AI model cannot interpret what happens within and between those hidden layers when the model responds to a specific prompt.
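To make the layer structure concrete, here is a minimal sketch of a feedforward network in Python with NumPy. The layer sizes and random weights are illustrative stand-ins, not taken from any real model; the point is that even with full access to the code, the hidden layer’s values carry no human-readable meaning.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0, x)

    # Illustrative sizes; production models are vastly larger.
    input_size, hidden_size, output_size = 4, 16, 2

    # Random stand-in weights; in a trained model these are learned
    # from data, and their individual values are not interpretable.
    W_hidden = rng.normal(size=(input_size, hidden_size))
    W_output = rng.normal(size=(hidden_size, output_size))

    x = rng.normal(size=input_size)   # input layer: the raw features
    h = relu(x @ W_hidden)            # hidden layer: opaque intermediate state
    y = h @ W_output                  # output layer: the model's prediction

    print(y)  # the output is visible; why the hidden activations produce it is not

A real deep learning model simply stacks many more hidden layers of this kind, which is what makes tracing any single decision back through the network so difficult.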
When you are not sure how a system works, can you be sure it is safe? Can you fix it when it breaks? The black box problem matters because organizations cannot confidently rely on a system whose operation is incomprehensible. When such a model generates inaccurate results, it is hard to repair without knowing its inner workings. The opacity of black box systems also heightens security risks, leaving room for exploitation by malicious actors, susceptibility to human biases introduced during training, and potential violations of regulations governing the use of sensitive personal data.
HOW AWARENESS AND HUMAN OVERSIGHT CAN APPLY TO BLACK BOX AI
Security awareness training (SAT), combined with human risk management principles, can effectively address the challenges posed by AI black boxes.
AWARENESS TRAINING FOR AI LITERACY
Traditional SAT best practices typically emphasize user training on phishing and password hygiene. Organizations must broaden that training to include AI literacy. Companies can educate users on the concept of black box AI and its associated challenges, including the biases AI systems can carry and how those biases can affect decision-making. Users should understand how to question and interpret AI outputs, particularly in sensitive contexts, and recognize their responsibility to report suspicious or unexplained AI behavior.
Although AI can expand the scope of deliverables and automate complex and repetitive tasks, it remains susceptible to inaccuracies and hallucinations. SAT must emphasize the need to continually review AI outputs to identify and mitigate biases and errors, particularly in high-stakes situations where mistakes can carry legal consequences and damage business operations and reputation. SAT can also help employees understand that AI cannot replace critical thinking, emotional intelligence, and ethical reasoning, the human skills that drive business decisions.
AI biases can produce inaccurate results, eroding trust in the model, and recognizing and eliminating AI bias is no easy task. SAT can help employees learn how to identify and report potential AI biases or errors, supporting the reliable operation of these systems. And because ethical practices underpin the safe deployment of AI, SAT can make ethics a core objective, emphasizing data privacy and security, transparency, and accountability.
THE HUMAN RISKS OF AI ADOPTION
Organizations need to evaluate the risks that come with users embracing AI, such as over-reliance on AI outputs without proper scrutiny, inadvertently feeding in data that reinforces AI biases, and misinterpreting AI outputs due to a lack of understanding. Human risk management should create an environment that balances AI tools with human critical thinking. Encouraging employees to critique AI outputs, and providing a safe channel to raise and report concerns about AI without fear of reprisal, can help build trust in AI-driven processes.
Incident reporting mechanisms can support these efforts by fostering open communication among employees about security concerns related to unexplained AI behavior, suspected misuse of AI, and potential biases in AI outputs.
Organizations should also consider developing a risk management framework that covers the identification, control, and ongoing monitoring of AI-related risks. AI inputs, outputs, and decision-making processes should be audited and monitored to detect potential security or ethical problems.
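As one concrete possibility, even a thin wrapper around model calls can capture inputs and outputs for later audit. The sketch below is illustrative only; the function names, log format, and file path are assumptions, not references to any particular tool or vendor API.

    import json
    import time
    import uuid

    # Hypothetical audit wrapper: records every prompt and response
    # to an append-only JSON Lines file before returning the result.
    def audited_call(model_fn, prompt, log_path="ai_audit_log.jsonl"):
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "input": prompt,
        }
        record["output"] = model_fn(prompt)  # model_fn stands in for any inference call
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record["output"]

    if __name__ == "__main__":
        # Dummy model function for demonstration purposes only.
        def echo_model(p):
            return "response to: " + p

        print(audited_call(echo_model, "Summarize the Q3 risk report"))

A production deployment would likely ship such records to a central, tamper-evident store, but the principle is the same: every AI decision leaves a reviewable trail, even when the model itself remains a black box.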
FINAL THOUGHTS
Black box AI raises significant issues of trust, ethics, accountability, cybersecurity, and more for businesses that rely on AI technology. Through user awareness training and human risk management practices, organizations can address the challenges posed by opaque AI and maintain trust in AI systems.
https://www.fastcompany.com/91332906/black-box-ai-puts-business-reliability-trust-and-transparency-at-odds