This article is based on Collibra and UCLA Health’s discussion at the Data Citizens 2024 conference in Orlando, FL, bringing together the world’s most innovative community of data leaders to experience breakthrough solutions. Collibra puts reliable, high-quality data in the hands of healthcare data citizens.
***
Most Americans have mixed feelings about artificial intelligence, and no industry proves this more than healthcare. While AI is increasingly common for analyzing medical imaging data and making proactive recommendations in preventative care, 60% of Americans say they would feel uncomfortable with their healthcare provider relying on artificial intelligence as part of their medical care.
More broadly, AI software is expensive to develop, especially as HIPAA requirements add security considerations for protected health information. What’s more, AI tools are not guaranteed to improve the patient-provider experience and could potentially even worsen it. Healthcare executives doubt whether their returns on AI investments will materialize, which highlights the importance of risk assessment and impact analysis.
UCLA Health is at the forefront of AI innovation in healthcare and has partnered with Collibra to bring clear, accessible AI governance to all levels of its organization. Two UCLA Health experts — Senior Product Manager Neda Xaymountry and Manager of Machine Learning Engineering and Data Governance Timothy Sanders — shared the stage at Data Citizens 2024 to talk about how their collaboration with Collibra benefits responsible AI development.
Balancing governance and innovation
To many developers, data governance is the opposite of innovation. Categorizing and organizing large data sets can feel cumbersome compared to the rapid pace of AI development. Yet Neda doesn’t see governance as an obstacle to new ideas.
“There is a misconception that governance impedes innovation,” Neda says. “The two can coexist, but there needs to be a balance. As part of our program [at UCLA Health], we achieve this balance by encouraging innovation while still building some guardrails around it.”
These guardrails are born from risk assessment. The healthcare marketplace is flooded with AI tools for everything from administration to error reduction to diagnosis. Large healthcare systems are often decentralized, meaning even tools developed in-house could contain redundancies, inefficiencies or errors.
UCLA Health prioritizes innovation by exploring a variety of databases for AI model discovery, but governance takes shape in three core areas: access provisions, process reviews and an AI model catalog.
1. Access provisions
Reviewers need access to data in order to evaluate new AI models, but this has to happen in a secure environment. Fortunately, UCLA Health already had workflows in Collibra for access provisioning through virtual machines. Adding the AI model database to these workflows was straightforward — and a good reminder that data governance doesn’t necessarily mean reinventing the wheel.
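The idea of reusing existing access-provisioning workflows can be sketched in a few lines. This is a minimal, hypothetical illustration — the role names, environments, and approval set are invented for the example and are not Collibra's actual workflow model.

```python
# Toy access-provisioning check: reviewers can evaluate AI models only
# from within an approved secure environment (e.g. a virtual machine).
# Role and environment names here are illustrative assumptions.
APPROVED_PAIRS = {
    ("model-reviewer", "secure-vm"),
    ("data-steward", "secure-vm"),
}

def can_access_model_db(role: str, environment: str) -> bool:
    """Grant AI model database access only to approved role/environment pairs."""
    return (role, environment) in APPROVED_PAIRS

print(can_access_model_db("model-reviewer", "secure-vm"))   # True
print(can_access_model_db("model-reviewer", "laptop"))      # False
```

Because the check is just another entry in an existing approval set, adding the AI model database reuses the same workflow rather than creating a new one — the "don't reinvent the wheel" point above.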
2. Process reviews
An interdisciplinary health AI council provides oversight for the validation and deployment of AI tools at UCLA Health. To accomplish this, the council reviews a high-level summary of how the model is used, its functionality and data sources.
“There’s also a subcommittee that performs a standard assessment on these models in four different domains,” Neda adds. To streamline the review process, all relevant documentation has to be presented in a consistent, trustworthy format.
3. Model catalog
Finally, the catalog must be complete. Every AI model at UCLA Health — whether under consideration, in development, or in use — has to be documented in a centralized location for fair evaluation.
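A complete catalog boils down to a simple rule: every model gets an entry, whatever its lifecycle phase. The sketch below shows one way such an entry might be structured; the field names and phases are illustrative assumptions, not UCLA Health's or Collibra's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    # Illustrative lifecycle phases — retired models stay in the catalog too.
    PROPOSED = "under consideration"
    DEVELOPMENT = "in development"
    PRODUCTION = "in use"
    RETIRED = "retired"

@dataclass
class CatalogEntry:
    """One documented AI model in a centralized catalog (fields are hypothetical)."""
    name: str
    phase: Phase
    owner: str
    data_sources: list = field(default_factory=list)
    summary: str = ""

catalog = [
    CatalogEntry("sepsis-risk", Phase.PRODUCTION, "ml-team", ["EHR vitals"]),
    CatalogEntry("readmit-forecast", Phase.RETIRED, "ml-team", ["claims data"]),
]

# Transparency extends to models no longer in production:
retired = [m.name for m in catalog if m.phase is Phase.RETIRED]
print(retired)  # ['readmit-forecast']
```

Keeping retired models queryable is what makes the catalog a record of the program's full history, not just its current state.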
“Our program is really focused on transparency and explainability, even for models that are no longer in production,” Timothy says.
Introducing the Collibra dashboard
To meet these criteria, UCLA Health turned to Collibra, whose backend validation and workflows power a user-friendly responsible AI dashboard. If anyone at the organization has questions about the state of AI development at UCLA Health, that’s where they can find their answers.
“This [dashboard] is our universally accessible entry point,” Timothy says. “Everyone within our organization has access to Collibra.”
For those who just need the big picture, the dashboard offers high-level fact sheets and resources for tracking the status of models in development. Testing and implementation move quickly, so the dashboard is a single source of truth for each project. No one is left looking at a document from six months ago wondering if it’s still the most recent version.
Data scientists working on the models — and anyone else looking for more technical details — can go deeper from the same user interface. Critically, many models in development haven’t made it far enough in the process to reach the oversight council, and some never will.
“One of the challenges that probably every organization sees is that a number of projects don’t go through the governance team,” Timothy observes. But these efforts are still worth tracking. Perhaps a particular data set or code logic would be useful in a future project, or maybe a developer needs to access a prior database to understand the technical lineage of a newer model currently in deployment. Having open access to everything in development can save time in product enhancement and implementation.
Other benefits of AI governance
So much of governance focuses on data centralization, but it’s more than just creating a library of facts. Collibra’s intelligent systems add new tools that accelerate model revision and educate a wider user base on AI capabilities.
API integrations
Collibra integrates with many tools already familiar to healthcare organizations, including electronic health record (EHR) systems. This can increase adoption rates and automate data entry. For example, UCLA Health has integrated an intake form for feature requests on active AI models, giving developers better feedback from the user base.
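An intake-form integration like the one described above typically reduces to assembling a structured payload and submitting it to the governance platform's API. The sketch below is hypothetical — the field names and endpoint are invented for illustration and are not Collibra's actual API.

```python
import json

def build_feature_request(model_id: str, requested_by: str, description: str) -> dict:
    """Assemble an intake-form payload for a feature request on an active
    AI model. Field names are illustrative, not a real Collibra schema."""
    return {
        "modelId": model_id,
        "requestedBy": requested_by,
        "description": description,
        "status": "new",
    }

payload = build_feature_request(
    "sepsis-risk-v2",
    "dr.chen",
    "Surface confidence intervals in the EHR view",
)
body = json.dumps(payload)

# In practice this would be POSTed to the platform's intake endpoint,
# e.g. requests.post(INTAKE_URL, json=payload) — URL and auth omitted here.
print(body)
```

Because the request lands in the same platform that catalogs the model, feedback stays attached to the asset it concerns rather than scattering across email threads.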
Transparency
A common concern in healthcare is that patients are hesitant to have AI inform their treatment plans. Collibra’s dashboard gives the UCLA Health team clear insights into how AI models are feeding data into the EHR. If a model makes a predictive recommendation for patient treatment, a healthcare provider can see what data sets informed that output.
“We can see exactly what is coming out of [the AI model] for full end-to-end traceability,” Timothy says.
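End-to-end traceability of this kind can be modeled as a walk over a lineage graph, from a model's output back through every upstream data set. The graph below is a toy example with invented artifact names; a governance platform stores this relationship data as metadata.

```python
# Toy lineage graph: each artifact maps to its direct upstream sources.
# Artifact names are illustrative, not real UCLA Health assets.
lineage = {
    "treatment-recommendation": ["risk-model-v3"],
    "risk-model-v3": ["vitals-feed", "lab-results"],
    "vitals-feed": [],
    "lab-results": [],
}

def upstream_sources(artifact: str, graph: dict) -> list:
    """Walk the lineage graph to find everything that informed an output."""
    seen, stack = set(), [artifact]
    while stack:
        node = stack.pop()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return sorted(seen)

print(upstream_sources("treatment-recommendation", lineage))
# ['lab-results', 'risk-model-v3', 'vitals-feed']
```

Given a predictive recommendation in the EHR, a traversal like this is what lets a provider answer "which data sets informed this output?" in one query.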
Demystification
It’s not only patients who have questions about AI. Employees within the health system have varying levels of experience with AI processes and terminology. To address this, UCLA Health’s dashboard includes a glossary of artificial intelligence and machine learning (ML) terms.
“We see this as something that can help demystify a lot of what’s happening in our AI and ML program,” Timothy adds.
Building on experience
UCLA Health’s program is impressive, and many organizations may wonder how long it takes to scale up an enterprise-wide AI governance platform. Fortunately, any previous work in data governance is transferable.
“Just because it’s the AI space doesn’t mean that you need to delete everything and start over,” Timothy advises. “A lot of these tools and processes that we have are adapted from existing [governance programs].” API integrations and process automation can also speed up the transition to an AI governance platform.
Today’s patients may still feel hesitant about the role of AI in healthcare, but future administrative and patient care processes will likely involve machine learning.
“What we have created thus far is just the beginning,” Neda says. As more AI tools become available for health systems, effective governance is no longer just an idea, but a necessity.