Author: Wills Catling

November 28, 2023

Introduction

I just returned from what is probably my last data privacy conference of 2023, the IAPP Data Protection Congress in Brussels. As I left three days of amazing conversations, keynotes, and thought-leader presentations behind me, it was clear that AI Governance and Risk Management was the foremost topic on everyone's mind.

The conference moved the needle for me on the role of the privacy officer in an organization's AI Governance efforts, but I still feel the old adage "it takes a village" rings true here. Who ultimately owns the responsibility for AI Governance within an organization? I'm still not sure, and based on the past week, I am not sure that a lot of privacy professionals are either. However, one thing everyone agreed on was that AI governance must include the privacy team in a meaningful way, where they truly have a voice and their role is on par with other senior stakeholders. Trustworthy AI does not happen without the privacy office!

Depending on your point of view, AI is either the greatest threat to individual rights and freedoms, the greatest invention of the modern technology era, or, and this is probably the most common belief, somewhere in between those extremes. There is no doubt AI is here to stay, and we, as privacy professionals, have a critical role in how our organizations develop, deploy, and manage AI when it involves the processing of personal information.

AI Governance & Privacy Professionals 

AI is not new; we heard during Léa Steinacker's keynote at the IAPP Congress how early "AI" can be traced back to the mid-1800s. Steinacker went on to discuss how Alan Turing developed an early AI machine during WWII (Hollywood captured this in the movie The Imitation Game). However, with the increased power of computers and the advances made in Generative AI in recent years, privacy professionals will need to review their current privacy programs and adapt them to address this 21st-century challenge.

At the IAPP Privacy. Security. Risk. 2023 conference last month in San Diego, we heard the IAPP’s CEO, Trevor Hughes, state that AI is a call to action for the privacy community before presenting a great analogy about how the invention of brakes (privacy controls) enabled another technological breakthrough, the car (Generative AI), to operate more safely, and ultimately go faster.

I would go one step further and say that the car needed other aspects of design to make it operate safely: lights, mirrors, reliable engines, and seatbelts. Likewise, organizations will require data security measures, sound data governance strategies, appropriate technology, data engineers, and compliance experts, among other resources, to develop, deploy, and maintain safe and trustworthy AI. To continue with the car analogy, all stakeholders will need to be in the car to ensure it is operating as intended, and all will need to be re-trained to drive it. Privacy professionals may not always drive the car, but they must have a driver's license, and they are critical to any AI governance structure.

AI & Privacy Concerns

AI doesn't just present significant privacy risks; there is also notable overlap between the fundamental principles of a privacy program and those of a nascent AI governance program. Like privacy, AI is a rapidly shifting and evolving landscape that often changes too quickly for fixed requirements. A principles-based approach allows compliance efforts to stay flexible and keep pace.

Legislators, regulators, academics, and industry experts are releasing opinions, best practices, and guidelines that discuss the importance of focusing on compliance during development rather than at deployment. That guidance seeks to earn trust by aligning systems with ethical and regulatory requirements.

In looking at the principles that govern a robust privacy program, it becomes clear how these principles can and should play a critical role in an AI governance model:

Lawfulness, Fairness, and Transparency:
  • Organizations should be able to explain decisions made by AI technologies and have people involved at critical junctures in important decision-making (i.e., "explainability").
  • AI systems should be designed and used in a non-discriminatory way, maximizing fairness, and promoting inclusivity.
  • Organizations should evaluate fairness principles in training new models and post-deployment.
  • AI Systems should be monitored to detect and mitigate bias.
  • When using personal information for inputs, data subjects should be made aware that their data is being used for AI, how that data is used in AI applications, and that there is a lawful basis for the use. For example, data subjects should be made aware when they are interacting with AI and not a real person.
  • Organizations should identify and document the lawful basis for each processing action involving AI.
Purpose Limitation:
  • AI system owners should determine the purpose of the AI system at the outset of activity and then ensure its development and use remains compatible with the original purpose.
  • Privacy offices should evaluate whether consent will be required, how consent will be obtained and, if desired, revoked by data subjects, and whether there is a risk the AI system will process personal data in a way not compatible with the initial purposes for which the data was collected.
  • Notices and disclosures should accurately and specifically describe the purpose for data collection and use, avoiding a generic description such as “the development and improvement of an AI system”. Specifics matter.
Data Minimization:
  • Depending on the use case, organizations should develop three scenarios for training AI systems, as each presents varying risk and requires a different regulatory compliance approach:
    • without personal data,
    • with personal data, and
    • involving some personal data.
  • Apply Privacy by Design (PbD) principles to mixed-data records (those containing both personal and non-personal data).
  • Developers should collect and use only personal data essential for the purpose of the AI system.
  • Avoid collecting or processing more data than is necessary for the AI system to function as expected and desired.
  • Organizations should implement organizational and technical measures to remove unnecessary personal data when developing and maintaining AI models.
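To make the minimization principle concrete, here is a minimal Python sketch of reducing a record before it enters a training set. The record shape, field names, and REQUIRED_FIELDS set are hypothetical assumptions, and a salted hash is pseudonymization rather than anonymization; the pattern to take away is to keep only the fields essential to the documented purpose and strip direct identifiers.

```python
import hashlib

# Hypothetical raw record; the field names are illustrative only.
raw_record = {
    "user_id": "u-1029",
    "email": "jane@example.com",   # not needed for the model's purpose
    "full_name": "Jane Doe",       # not needed for the model's purpose
    "purchase_category": "books",
    "purchase_amount": 42.50,
}

# Fields the (hypothetical) model actually needs, agreed at design time.
REQUIRED_FIELDS = {"user_id", "purchase_category", "purchase_amount"}

def minimize(record: dict, salt: str) -> dict:
    """Keep only the fields essential to the stated purpose and
    pseudonymize the remaining direct identifier."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    # Replace the direct identifier with a salted one-way hash so the
    # training set no longer carries the raw ID (pseudonymization).
    kept["user_id"] = hashlib.sha256(
        (salt + kept["user_id"]).encode()
    ).hexdigest()[:16]
    return kept

print(minimize(raw_record, salt="training-2023"))
```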
Accuracy:
  • Statistical accuracy (how often the AI model guesses the correct answer) is critical to ensuring fairness in AI systems. “Hallucinations” or inaccurate outputs are unfortunately a current reality.
  • Ensure your AI model can provide sufficiently accurate answers when making inferences about people, and, again, have critical decisions made by AI technologies overseen and reviewed by skilled, well-trained employees.
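Because statistical accuracy feeds directly into fairness, it is worth measuring per group as well as overall. A minimal sketch, using entirely made-up labels, predictions, and group assignments:

```python
# Entirely made-up labels, predictions, and group attribute for illustration.
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 1, 1, 0, 1, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print("overall:", accuracy(labels, predictions))  # 0.625
for g in sorted(set(groups)):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    print(g, accuracy([labels[i] for i in idx], [predictions[i] for i in idx]))
# Group "a" scores 1.0 while group "b" scores 0.25: the overall number
# alone would hide a disparity worth investigating.
```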
Security:
  • AI systems must be safe, secure, and perform as intended with resistance to being compromised.
  • Security for AI cannot be one-size-fits-all; an organization will need to consider the risks posed by the types of personal data and the purpose of the system, and manage these accordingly.
  • AI has unique challenges and will introduce new risks, as well as potentially enhance known risks, e.g., model inversion attacks, heavy reliance on third-party code.
  • Implement controls to address security risks associated with integrating AI technologies into existing systems.
  • Implement the principle of "least privilege," giving any user account or process only those privileges essential to performing its intended functions, and regularly monitor and update those privileges.
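In practice, least privilege is usually enforced by your identity and access management platform, but as a minimal, self-contained sketch of the idea (the account names and privilege strings below are invented for illustration):

```python
from functools import wraps

# Hypothetical role-to-privilege map: each service account holds only the
# privileges essential to its function.
PRIVILEGES = {
    "training-pipeline": {"read:training_data"},
    "inference-service": {"read:model", "write:predictions"},
}

def requires(privilege: str):
    """Decorator that refuses to run unless the caller's account holds
    the named privilege."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(account: str, *args, **kwargs):
            if privilege not in PRIVILEGES.get(account, set()):
                raise PermissionError(f"{account} lacks {privilege}")
            return fn(account, *args, **kwargs)
        return wrapper
    return decorator

@requires("read:training_data")
def load_training_data(account: str) -> str:
    return "training records"

print(load_training_data("training-pipeline"))  # allowed
try:
    load_training_data("inference-service")     # denied: outside its scope
except PermissionError as e:
    print(e)
```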
Storage/Retention:
  • Retention periods, along with review cycles, should be listed within privacy notices and internal privacy policies.
  • Factors like model drift or possible changes in societal acceptance and expectations should be considered in policy-making decisions.
  • Defining a legitimate purpose for extended retention of data used for testing AI systems should be carefully considered and, once decisions have been made, well documented.
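As a minimal sketch of what enforcing a documented retention period over stored test records might look like (the in-memory store, record shape, and 365-day period are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative period taken from the written policy

# Hypothetical store of test records, each carrying a creation timestamp.
records = [
    {"id": "t-1", "created_at": datetime(2022, 1, 10, tzinfo=timezone.utc)},
    {"id": "t-2", "created_at": datetime(2023, 11, 1, tzinfo=timezone.utc)},
]

def sweep(records, now=None):
    """Split records into kept vs. removed based on the retention period,
    so the deletion itself can be reviewed and documented."""
    now = now or datetime.now(timezone.utc)
    kept, removed = [], []
    for r in records:
        (removed if now - r["created_at"] > RETENTION else kept).append(r)
    return kept, removed

kept, removed = sweep(records)
print("removed:", [r["id"] for r in removed])  # records past retention
```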
Accountability:
  • Appropriate human oversight should occur during all aspects of AI technology system development and deployment.
  • Mechanisms to ensure accountability for the impacts of AI use and decision-making, including logs of when and why human intervention has overridden AI decisions, should be carefully considered, designed, implemented, and documented.
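To illustrate the kind of override log that last point describes, here is a minimal Python sketch; the record fields and the append-only JSON Lines file are illustrative assumptions, not a prescribed design:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One entry capturing when and why a human overrode an AI decision.
    Field names are illustrative only."""
    decision_id: str
    model_output: str
    human_decision: str
    reviewer: str
    reason: str
    timestamp: str

def log_override(decision_id, model_output, human_decision, reviewer, reason,
                 log_path="ai_override_log.jsonl"):
    """Append the override record to an append-only JSON Lines file."""
    record = OverrideRecord(
        decision_id=decision_id,
        model_output=model_output,
        human_decision=human_decision,
        reviewer=reviewer,
        reason=reason,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_override(
    decision_id="loan-00042",
    model_output="deny",
    human_decision="approve",
    reviewer="j.doe",
    reason="Model relied on outdated address history; applicant supplied updated records.",
)
```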

Conclusion

As we all continue to navigate our way through the early days and the often twisty, hilly turns of emerging AI governance strategies and models, it's clear that privacy professionals have a vital role to play in ensuring a smooth ride for the users of this ever-evolving technology. To return to the car analogy: just as a driver wants trustworthy lights, seat belts, mirrors, and a reliable engine, trustworthy AI thrives on the collaborative efforts of security, data governance, and compliance professionals working together in harmony. So, while privacy professionals may or may not be behind the wheel in any given organization, they are sure to play a key and necessary role as critical navigators, steering organizations toward an era of useful, productive, secure, and ethical AI use.

This concludes Part 1 of our latest AI Governance Insight Series. Stay tuned for the final part in this series where we will discuss high-level recommendations to consider when updating your privacy program to support AI Governance.

For information on how LevelUP can help with your Privacy Program, please contact Wills Catling, Director at LevelUP Consulting Partners, at william.catling@levelupconsult.com; we'd be happy to set up a consultation to hear about your program needs.
