Organizations are under intense pressure to leverage AI as a competitive advantage. However, using AI without proper governance, or without visibility into and control of the underlying data, can lead to model bias, inaccuracies, legal and ethical problems, loss of trust, and more. This glossary will help you understand the terms and phrases you’ll come across as you explore AI and AI governance.
A
AI governance
AI governance is the application of rules, processes, and responsibilities to drive maximum value from your automated data products by ensuring applicable, streamlined, and ethical AI practices that mitigate risk, adhere to legal requirements, and protect privacy.
Artificial intelligence (AI)
The simulation of human intelligence in machines that are programmed to think and learn like humans. AI can take on tasks that were previously done by people, including problem solving and repetitive tasks.
AI use case
How AI models will be used to solve a specific problem. Examples include chatbots, fraud detection, and personalized product recommendations.
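One of the use cases above can be sketched in a few lines. The snippet below is a toy, rule-based "chatbot" meant only to illustrate the idea of applying a model to a specific problem; the keywords and canned answers are hypothetical, and production chatbots would use a trained language model rather than hand-written rules.

```python
# Hypothetical keyword-to-answer rules for a toy support chatbot.
RULES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. A human agent will follow up."
```

Even a trivial sketch like this surfaces governance questions the glossary covers: who is accountable for the answers, and how would a wrong or biased response be explained and corrected?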
AI Verify Foundation
A member-based foundation that aims to harness the collective power and contributions of the global open-source community to develop AI testing tools to enable responsible AI. The Foundation promotes best practices and standards for AI.
Artificial general intelligence (AGI)
Refers to AI systems with human-level cognitive abilities, capable of understanding, learning, and applying knowledge across various domains autonomously.
Accountability
Clearly defined roles and responsibilities for all stakeholders involved in the AI lifecycle, including developers, data scientists, business users, and legal and privacy professionals, to report on, explain, or justify AI model output.
Algorithm
A set of rules or step-by-step instructions designed to perform a specific task or solve a specific problem using computational resources.
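A minimal sketch of what this definition means in practice: the classic binary search algorithm, a fixed sequence of steps that solves one specific problem (finding a value in a sorted list). This is a standard textbook example, not something specific to AI systems.

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # inspect the middle element
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1        # target can only be in the upper half
        else:
            high = mid - 1       # target can only be in the lower half
    return -1
```

The same word applies whether the steps are hand-written, as here, or learned from data, as in a machine learning model.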
Audit
An official, independent examination and verification to ensure that AI and the data that drives it meet specific criteria set forth by governing or regulatory entities.