
Best Practices for Using AI with Good Governance, from IBM AGC Donna Haddad

by Petra Pasternak

Donna Haddad remembers how exciting it was to watch IBM’s Watson AI supercomputer beat the reigning (human) world champions on Jeopardy in early 2011. The era of natural language processing had taken off. Haddad, who was in-house counsel at IBM at the time, knew she wanted to be a part of it.

When IBM launched Watson as a startup AI business, Haddad jumped at the opportunity to help train the system in Arabic, in which she’s fluent. She went on to serve as senior legal counsel for Watson AI, helping to grow the business globally.

Fast forward 10 years. “The world changed with ChatGPT and foundation models,” Haddad says. “It's very different, but I've been excited to be working on it and see the evolution of the technology.”

Today, as IBM Vice President and Associate General Counsel, Haddad is focusing on what it takes for businesses to successfully adopt and deploy generative AI. As a founding member of IBM's AI Ethics Board, she’s involved in helping to develop guardrails for the responsible use of these powerful technologies.

Haddad recently sat down with Everlaw to discuss IBM’s philosophy and approach to GenAI, the obligations of legal professionals in helping steer the development and use of these technologies, and her optimism about the potential benefits.

Donna Haddad, Associate General Counsel, IBM

What do you think of the massive acceleration we’ve seen in LLM capabilities?

It's so exciting. It's really changing everything. My kids knew I worked in AI, but they didn't actually know what it was. Now that they know what it is, they think I'm cool, for the most part!

At IBM we've all been encouraged to adopt AI and try to use it. For the last two years, the company has hosted an AI challenge where all of us get together to come up with different ideas for how to use AI. Out of that exercise came some really great use cases that are already implemented internally at IBM and also by our clients. We use the same tool, IBM watsonx, that financial institutions use to detect fraud and prevent cyberattacks and that healthcare providers use to help diagnose diseases and improve patient care.

How has GenAI impacted your work life?

We're already using a couple of tools. IBM has an “Ask HR” tool that allows me to go in and tell it what I want to do, whether it's to move an employee or update salaries. It asks me questions and I respond. Then it sends me into our HR tool and walks me through the steps to help me navigate the site. It’s conversational so it’s really easy to use.

Our homepage now uses watsonx in our search bar. You can ask questions and find information quickly. If the answer that is generated isn’t exactly what you were looking for, you just modify your question and get a new answer immediately. It has been an incredible time saver and a good example of how AI is helping people do their work more efficiently.

Our legal team has collaborated with the CIO Automation Hub to develop the Nondisclosure Agreement (NDA) Accelerator, an internal tool that automates review and speeds up the approval of client-paper NDAs using IBM's automation and AI capabilities. The tool uses GenAI to determine whether the document contains certain “must-have” terms and provides a summary report within minutes, helping the attorney move it more quickly to approval.

As a company, we've always talked about AI as the tool in the hands of doctors, lawyers, or other professionals to help them do their job better.

We should be cautious. As lawyers, we're trained to worry about what could go wrong. And that's not necessarily a bad skill, provided it doesn't stop us from leveraging the good in the technology. One of IBM’s Principles of Trust and Transparency is that the purpose of AI is to augment human intelligence. We can’t let AI make important decisions without human intervention or approval.

That being said, I'm very optimistic about what AI has to offer. With any new technology we need to understand how it works and make sure we're using it responsibly. And that’s not only because it's the right thing to do, but because we have ethical obligations under the rules of professional conduct as lawyers that require us to do that.

I agree with the saying that AI won't replace lawyers, but lawyers who use AI will replace lawyers who don't. IBM has always talked about AI as a tool in the hands of people, augmenting human intelligence, not replacing people. And for lawyers, that's legally required under our ethical rules. You can't let it do your job.

As a company, we've always talked about it as the tool in the hands of doctors or lawyers or other professionals to help them do their job better. In healthcare use cases, for instance, the technology could read scans at one level and the doctor could read them at another level. But when you put man and machine together, that's when you get the best results.

Donna Haddad and Megan Ma at Summit
Donna Haddad, Vice President, Associate General Counsel at IBM Cloud (left), and Dr. Megan Ma, Assistant Director at Stanford (right), discuss large language models at Everlaw Summit.

The technology is moving so quickly that there is no way to predict the most transformative impact. Ten years ago, training Watson, I couldn't have predicted what these tools are doing now, and innovation is coming even faster. But we know it will change how we work.

It is already really good at doing a lot of mundane tasks that nobody wants to do. I remember back in the day how I’d sit in a room reviewing documents for M&A. Nobody misses that. For litigation, too, tech tools have transformed discovery in a way that we couldn't have imagined.

I feel like AI is going to do the same thing. It's going to change the way we practice law. And it's going to have an impact on law firms, in-house teams, as well as the judiciary.

We can’t let AI make important decisions without human intervention or approval.

One of the things I think is really important is putting a governance policy in place about the use of GenAI. At this point, with widespread access to the tools, it’s important to get a handle on how your company and your employees are using it. I’m hearing some companies are ignoring it for now and letting employees do what they want.

At IBM we’re encouraging people to use it responsibly. In order to do that, you have to know how people are using it so that you can help create the governance structure and the guardrails they need.

Alex Su and Kevin Roose at Everlaw Summit
Futureproof author and New York Times journalist Kevin Roose (right) shares tips on staying relevant in the age of AI with Alex Su at Everlaw Summit.

As an attorney, which duties and ethical obligations should be top of mind?

IP ownership is a huge issue. You need to understand who owns the IP and whether your AI vendor provides any IP protection.

The duty of confidentiality requires that, if you're using a third-party tool from a non-lawyer, you understand how robust their security measures are. Do you know where your data is being stored and how they're reusing it? Are they training their AI with your data? All companies should be concerned about that.

Lawyers also have to be particularly concerned about inadvertently waiving attorney-client privilege by putting confidential legal advice or information about clients into ChatGPT.

Bias is a major risk. You want to understand how the model was trained and whether the data was representative to mitigate any concerns about bias. And you want to make sure that attorneys use AI to augment their work, not let it do the work for them.

There’s also the duty to supervise AI. Lawyers have an obligation to verify that the AI's work product is complete and correct and doesn’t include hallucinations. Probably every lawyer knows about the case in New York where ChatGPT fabricated case citations. This obligation is ongoing, as AI models are constantly being trained on new material.

There are also regulations now that attorneys need to understand and help their clients navigate. Of course, there is the EU AI Act but there are also state and city regulations now. There’s a lot for lawyers to consider.

Lawyers and business people all need to be making sure they understand how AI is being used, how it was trained, and what implications it could have on the business. To use it responsibly, you have to do your homework and make sure that you're putting in the proper guardrails.

You have to know how people are using it so that you can help create the guardrails they need.

As a founding member of IBM’s AI Ethics Board, you take a bigger picture perspective on GenAI. How does that play out?

As in-house lawyers, beyond our legal ethical obligations, we also need to help our companies think about AI ethics more broadly, in terms of what is good for our business clients and ultimately for society.

At IBM, our AI ethics board looks at use cases to ensure they align with our values as a company and the regulatory landscape, and we look for ways to educate our business clients about using AI responsibly. In the end, the best thing for adoption is people feeling they can trust the technology and use it in a way that helps society and doesn't hurt people.

Lawyers play a big role in educating their clients to help their teams understand what they need to be thinking about when and how they use AI technology.


Join Donna Haddad at Everlaw Summit ‘24 and hear how other leaders from corporate law departments, law firms, and academia are responding to the rapid evolution of GenAI! Register today.