Elham Tabassi Associate Director of Emerging Technologies, NIST


Nearly five years ago, the U.S. National Institute of Standards and Technology (NIST) began building a program to advance the development of trustworthy and responsible AI systems. It was Elham Tabassi, an electrical engineer and chief of staff of the institute’s IT lab, who pitched moving the conversation about the impact of AI from principles to practical policy implementation.

Her suggestion proved consequential. After Tabassi’s team started doing research on AI security and bias, Congress directed NIST (part of the Commerce Department) to develop a voluntary risk-management framework for trustworthy AI systems as part of the National Defense Authorization Act (NDAA) of 2021. She led that effort, and in January 2023 unveiled the final framework, which is designed to help developers and users of AI identify the risks associated with AI systems and offers practical guidance for addressing and minimizing those risks.


Born and raised in Iran during a time of revolution and war, Tabassi says she always dreamed of being a scientist. She immigrated to the U.S. for her graduate education in 1994 and began working at NIST five years later on various machine-learning and computer-vision projects with applications in biometrics evaluation and standards. Earlier in her career, she was the principal architect of NIST’s Fingerprint Image Quality (NFIQ), now an international standard for measuring fingerprint image quality, which has been deployed by the FBI and Department of Homeland Security. (This interview has been condensed and edited for clarity.)

TIME: Could you tell me about the AI Risk Management Framework (RMF) and the goal in developing it? Is this the first AI RMF created?

Elham Tabassi: It is the first one created by NIST; it’s not an update to a previous one. So AI is not new to us at NIST. The Trustworthy Responsible AI program was put together by NIST in 2018–2019 after the advances in deep learning. Many organizations and companies were building more purposeful AI programs, and there were a lot of efforts to create ethical, responsible, and trustworthy AI. So when we were building this program, there was a competitive process, and the vision that I presented was to go from principles to practice: figuring out the building blocks of the high-level principles of trustworthy, responsible, and ethical AI.

We put all of our research efforts toward the development of the AI RMF and launched an open, transparent, inclusive process, and intentionally reached out not only to technology developers, computer scientists, mathematicians, statisticians, and engineers like myself, but to attorneys, psychologists, sociologists, cognitive scientists, and philosophers. We ran many listening sessions to hear about AI technology and how it’s being built, but also about who is being impacted, and those were very helpful in the development.

You said that the framework is voluntary and that it’s intended for agencies and organizations to use to analyze the risks associated with AI. Have agencies and organizations been using it since the framework was released in January?

Every day we learn about a new agency or entity that is using it. We really encourage organizations to let us know how they’re using it, because measuring the effectiveness of the RMF is something really important, personally, to me. It’s very difficult to measure how a voluntary tool is being used, but it’s an accountability thing. We are hearing more from the companies using it, including many major companies. I don’t have permission to name them, but let’s just put it this way: Microsoft released a blueprint for trustworthy, responsible AI that calls for using NIST’s AI RMF. They said that they have aligned their internal practices with it and are using the AI RMF.

We have heard similar things from other organizations and within the government. One of the things about the AI RMF is that the audience is very broad. It’s for everybody that’s designing, developing, using, or deploying AI. So different departments and different U.S. government agencies are also thinking about or talking with us about building profiles. The Department of Labor, for example, is building a profile of AI risk management.

And a key part of the framework is this risk-based approach. We’ve seen a lot of sectors do this, but why does a risk-based approach make sense for AI?

I think a risk-based approach makes sense for a lot of technology sectors, including AI, because there is no one-size-fits-all approach. A prescriptive measure is going to be very restrictive; it’s not going to solve problems, it’s not going to be fit for purpose, and it can also stifle innovation. The example that I always use is facial recognition: if we use it to unlock our phones vs. it being used by law enforcement, you can see that there are very different levels of risk.

So with the framework, it sounds like NIST is not trying to eliminate risks from AI, but rather identify and manage those risks. Is that correct?

For us, building models and working in computer science and technology, we always say zero error doesn’t exist. We try to minimize risk, but we’re not going to be able to build systems that have zero risk. Instead, what needs to be done is understanding the risks that are involved, prioritizing those risks, and understanding the interactions among them. One of the important contributions of the AI RMF is that it provides a structure for conversations about AI risk and AI risk management, and a sort of interoperable lexicon for talking about them. Every entity needs to know its risk appetite, understand the risks involved, weigh the trade-offs and interactions among them, and come up with a solution.

The U.S. doesn’t have a formal set of laws for how AI should be regulated in the private sector. Meanwhile, the E.U. is already on it. What do you see as the next steps in AI regulation here in the U.S.?

We definitely need to bring some technical robustness to have laws that are clear and enforceable. For example, if we say AI safety or accountable AI, what does it really mean? My personal thought is that the right set of guardrails and safeguards for AI will take a hybrid of horizontal and vertical approaches that brings some sort of interoperability and uniformity for understanding, managing, and measuring risk across different domains. [Editor’s note: Horizontal regulation applies to all applications and sectors of AI and is typically led by the government, whereas vertical regulation applies only to a specific application or sector of AI.] But there are certain differences or needs for each of the use cases, so that’s why a vertical or use-case-specific measurement-standards policy would help; I think that would be a more flexible approach that allows innovation to happen.
