In the bustling heart of the financial world, a man named Dr Paul Dongha stood as a beacon of ethical wisdom and technological prowess. His journey began in the 1990s, a time when AI was a fledgling concept, and Paul was among the pioneers, diving deep into the realms of artificial intelligence with a passion that would define his life’s work. He spent years teaching and researching AI, motivated by the possibilities it held for the future.
Fast forward to the present, and Paul has become a guardian of integrity in the ever-evolving landscape of data and AI ethics. Currently Head of Responsible AI and AI Strategy at NatWest Group, and before that Group Head of Data & AI Ethics at Lloyds Banking Group, his mission is clear: to ensure that the powerful tools of AI are used responsibly and ethically, safeguarding organizations, customers, and society alike.
Paul’s vision for an ethical AI framework is comprehensive and deeply human. He believes that technology alone cannot navigate the murky waters of ethical dilemmas; human judgment is essential. Key to achieving this is the creation of an AI ethics board. Paul has co-chaired an organization-wide ethics board, a forum of senior leaders tasked with making critical decisions about the deployment of AI models. This board ensures that every model is scrutinized not just for its technical merits, but for its potential impact on society and individual lives.
Paul also understands that knowledge is power. He champions rigorous training, communication, and awareness programs across the organization, ensuring that everyone, from data scientists and risk management and compliance teams to executives, is well-versed in the ethical risks and responsibilities associated with AI. His commitment to fostering a culture of ethical awareness is unwavering, recognizing that a well-informed team is the foundation of responsible AI deployment.
In his heart, Paul knows that AI holds immense potential for good. He envisions a world where AI can bring tailored learning to underserved communities, revolutionize healthcare with groundbreaking discoveries like AlphaFold 3, and drive sustainability initiatives that protect our planet. His dedication to these ideals is a testament to his belief in AI’s power to transform society for the better.
Through his tireless efforts, Dr Paul Dongha has carved a path where technology and ethics walk hand in hand, ensuring that the future of AI is bright, responsible, and profoundly human.
Building a Responsible AI Assurance Framework
Paul argues for a generative AI assurance framework that covers people, process, and technology across an enterprise.
On the people side, an ethics board or committee should be established. This committee, composed of senior leaders, would be responsible for making decisions on model deployment, weighing ethical risks such as bias, discrimination, and reputational harm. Training programs on AI ethics and responsible AI should also be implemented for various personas across the organization, including data scientists, non-technical users, legal, risk management, data privacy teams, and executives.
On the process side, for organizations with complex business processes, such as banks, ensuring model validation and risk management teams have the right support, technology, and skills is crucial. Additionally, existing data science or DevOps lifecycles should be augmented to include ethical checks, focusing on explainability, bias removal, and fairness during model creation.
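To make the idea of lifecycle ethical checks concrete, here is a minimal sketch of a fairness gate that could run before deployment, using the Fairlearn library mentioned below. The `ethics_gate` function, its 0.1 demographic-parity threshold, and the pass/fail convention are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal sketch of a pre-deployment fairness gate (illustrative only).
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    selection_rate,
)
from sklearn.metrics import accuracy_score

def ethics_gate(y_true, y_pred, sensitive_features, max_dpd=0.1):
    """Return (passed, report) for a simple fairness check in the lifecycle."""
    # Per-group accuracy and selection rate, kept as a review record.
    frame = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sensitive_features,
    )
    # Demographic parity difference: the gap in selection rates across groups.
    dpd = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive_features
    )
    # The 0.1 threshold is an assumed policy choice, not a universal standard.
    return dpd <= max_dpd, {"by_group": frame.by_group, "dpd": dpd}
```

A gate like this would sit alongside, not replace, human review by the model validation team.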
Finally, on the technology side, building guardrails into the models themselves is necessary. Techniques include debiasing training data and using libraries like InterpretML, Fairlearn, and AIF360 to detect and address bias throughout the data science lifecycle. Model cards documenting the model’s characteristics, assumptions, training data, testing results, and ethical impact assessments are also crucial for responsible AI practices. Lastly, maintaining a model inventory that tracks all models in production, including risk levels, deployment duration, and retraining needs, is essential for effective oversight.
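As a rough illustration of what model cards and a model inventory might look like in code, the sketch below assumes a simple in-house registry; the field names and structure are hypothetical, not a standard schema or Paul's own format.

```python
# Illustrative model-card and inventory records for an assumed in-house registry.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCard:
    name: str
    characteristics: str            # intended use, model type, key limitations
    assumptions: str                # e.g. population the training data represents
    training_data: str              # provenance and known gaps of the data
    testing_results: dict           # metric name -> value from validation
    ethical_impact_assessment: str  # outcome of the mandatory assessment

@dataclass
class InventoryEntry:
    card: ModelCard
    risk_level: str                 # e.g. "low" / "medium" / "high"
    deployed_on: date
    retrain_due: date

    def needs_retraining(self, today: date) -> bool:
        # Flag models whose scheduled retraining date has passed.
        return today >= self.retrain_due

# The inventory itself: every model in production, tracked in one place.
inventory: list[InventoryEntry] = []
```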
Rise of AI and the Urgency of Ethics
Paul Dongha describes his personal journey in the field of AI.
After earning a PhD and working in AI research and education during the 1990s, he found few career opportunities in the field and instead pursued a successful career in finance. Paul has led many enterprise-wide transformation programs, building and designing complex quantitative systems involving Big Data feeds.
However, around 2018, he recognized the significant advancements in machine learning and AI being utilized by major companies. This resurgence of AI, particularly the complex and probabilistic nature of generative models, sparked a renewed concern for the ethical risks involved.
This realization reignited his passion for AI, drawing him back to the field in 2019; by 2020, the potential for rapid AI growth had solidified his belief in the critical need to address these complex ethical issues.
Leading the Way in Ethical AI
Paul reports encountering a positive reception for his work on ethical AI. The companies he’s worked with have expressed genuine interest and curiosity in understanding this concept, despite the air of mystery that can surround it. He attributes this openness to the growing awareness of ethical risks in AI and the potential for reputational damage if these risks are not addressed. Consequently, he’s been welcomed by development teams, data scientists, and machine learning engineers who are eager to collaborate and ensure responsible AI practices.
Urgency of Ethical AI Frameworks
Paul argues that ethical considerations surrounding AI are universal across industries. While inherent risks like explainability, bias, and robustness apply to all AI models, the severity of consequences varies by sector.
High-stakes sectors like healthcare and finance face particularly critical ramifications for model errors. Public services are also an area of concern, given the potential impact on people’s lives.
In light of these risks, Paul emphasizes the importance of a robust assurance framework for all organizations using AI. This framework, driven by senior leadership, should establish safeguards to mitigate ethical risks across various industries. He concludes that there’s no avoiding these challenges, and that responsible AI development necessitates addressing these issues head-on.
Building a Responsible AI Future
Paul stresses the importance of robust program management when implementing a generative AI assurance framework. Success hinges on several key principles. First, a clearly defined vision and measurable goals ensure everyone is working towards the same objective. Second, strong leadership commitment is crucial to drive collaboration between the various teams involved. Third, AI ethics specialists play a critical role in bridging the gap between technical teams and fostering an understanding of the ethical considerations behind responsible AI.
Furthermore, a successful program requires an organizational culture that acknowledges the importance of ethical AI and fosters a sense of shared responsibility. The evolving nature of the AI field necessitates an adaptable framework that can accommodate ongoing learning and experimentation. This dynamic environment, though challenging, also presents exciting opportunities for innovation. Finally, Paul highlights that the problem-solving nature and novelty of building an AI assurance framework can be motivating for engineers, fostering a positive program environment.
Staying Informed and Proactive
Paul Dongha addresses the challenge of navigating the ever-evolving regulatory landscape surrounding AI. The global nature of AI development presents a complex situation, with various bodies formulating regulations: the OECD’s principles, the EU’s AI Act, and individual US states enacting their own laws. Similar trends are observed in Canada and other countries, with industry-specific regulations emerging as well.
To stay informed, Paul recommends establishing a dedicated horizon scanning team within an organization. This team, consisting of just a few people, would be responsible for monitoring publications from relevant organizations like the OECD and EU. Legal firms specializing in tracking these regulations and communication with industry regulators are also valuable resources.
Staying ahead of the curve, however, proves more challenging. Rather than attempting to predict future regulations, Paul suggests a proactive approach. Organizations should identify potential risks within their specific AI use cases and proactively mitigate them. This focus on responsible AI practices aligns with ethical obligations and positions them to contribute meaningfully to shaping future regulations based on real-world experience.
Measuring the Effectiveness of a Generative AI Assurance Framework
Paul emphasizes the importance of measurable metrics to assess the effectiveness of a generative AI assurance framework. He proposes a set of key metrics that track various stages of the model development process; a minimal sketch of how they might be computed follows the list below.
- Ethical Impact Assessment Completion: This metric monitors the number of models that have successfully completed a mandatory ethical impact assessment.
- Ethical Impact Assessment Success Rate: This metric goes beyond completion rates and focuses on the percentage of models that pass the ethical impact assessment, indicating a successful mitigation of potential ethical risks.
- Ethics Board Review: This metric tracks the number of models presented to the ethics board for additional scrutiny and approval, ensuring a higher level of oversight for potentially sensitive models.
- Model Validation Success Rate: This metric focuses on the technical aspects, tracking the percentage of models that pass the model validation team’s assessments, ensuring the models function as intended without technical flaws.
- Model Inventory Completion: This metric tracks the number of models that have successfully reached the final stage by being added to the organization’s risk model inventory. A complete inventory provides a clear picture of all deployed models for ongoing monitoring and risk management.
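Here is a minimal sketch of how these five metrics might be computed from a registry of model records. The `ModelRecord` fields and stage flags are assumptions made for illustration, not part of Paul's framework.

```python
# Illustrative computation of the five assurance-framework metrics.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    eia_completed: bool        # ethical impact assessment finished
    eia_passed: bool           # assessment passed, risks mitigated
    board_reviewed: bool       # escalated to the ethics board
    validation_passed: bool    # cleared by the model validation team
    in_inventory: bool         # registered in the risk model inventory

def framework_metrics(models: list[ModelRecord]) -> dict:
    n = len(models)
    completed = [m for m in models if m.eia_completed]
    return {
        "eia_completion": len(completed),
        "eia_success_rate": (
            sum(m.eia_passed for m in completed) / len(completed)
            if completed else 0.0
        ),
        "board_reviews": sum(m.board_reviewed for m in models),
        "validation_success_rate": (
            sum(m.validation_passed for m in models) / n if n else 0.0
        ),
        "inventory_completion": sum(m.in_inventory for m in models),
    }
```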
By monitoring these metrics, organizations can gain valuable insights into the effectiveness of their AI assurance framework in identifying and mitigating risks throughout the entire AI development lifecycle. These metrics not only assess the health of the framework but also demonstrate a commitment to responsible AI practices.
Mitigating the Unforeseen
Paul recognizes the inherent difficulty of anticipating unforeseen consequences with AI. However, he proposes two key strategies to mitigate these risks.
Firstly, Paul emphasizes the importance of fostering diversity within data science teams. This includes diversity in terms of gender, background, culture, and age. A broader range of perspectives during model development can help identify potential biases or unintended consequences that a homogenous team might overlook.
Secondly, Paul describes a structured process called “consequence scanning.” This proactive approach involves a series of steps for teams to identify potential negative behaviors a model might exhibit. By considering “what-if” scenarios and corresponding mitigation strategies, teams can address potential issues before they arise. For example, consequence scanning might involve examining how a model could unintentionally discriminate against a certain group, even if such discrimination wasn’t the intended purpose. Additionally, this process includes analyzing the training data for potential biases or a lack of diversity that could lead to skewed outcomes when deployed in different regions.
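One lightweight way to make the output of such a session auditable is to record each what-if scenario alongside its mitigation and an accountable owner. The structure below is an illustrative sketch; the fields and the lending example are hypothetical, not Paul's prescribed format.

```python
# Illustrative record structure for consequence-scanning output.
from dataclasses import dataclass

@dataclass
class Consequence:
    scenario: str        # the "what-if": behavior the model might exhibit
    affected_group: str  # who would bear the harm if it occurred
    intended: bool       # intended consequence, or an unintended side effect
    mitigation: str      # agreed action to prevent or limit the harm
    owner: str           # person or team accountable for the mitigation

# Hypothetical entry from a lending-model session.
entries = [
    Consequence(
        scenario="Model under-approves applicants from postcodes with sparse credit history",
        affected_group="Recent immigrants and young applicants",
        intended=False,
        mitigation="Audit approval rates by postcode cohort before each release",
        owner="model-risk team",
    ),
]
```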
By implementing these two strategies, organizations can take a proactive approach to mitigating unforeseen consequences and fostering more responsible AI development.
Beyond Code
Paul dispels the myth that data and AI ethics specialists require extensive technical expertise. While a foundational understanding of probabilistic models, common techniques such as gradient descent and XGBoost, and transformer models is beneficial, the core competency lies in ethical considerations.
A strong understanding of how and where bias can creep into AI models is crucial. This includes recognizing bias in raw data, training data labeling, discrepancies between training and testing data sets, and potential mismatches between training data origin and deployment location.
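As one concrete illustration of the last point, a simple check can compare group representation in the training data against the population at the deployment location. This sketch assumes pandas Series of group labels and is only a first-pass signal, not a complete bias audit.

```python
# Illustrative check for mismatch between training data and deployment population.
import pandas as pd

def representation_gap(train_groups: pd.Series, deploy_groups: pd.Series) -> pd.DataFrame:
    """Share of each group in training data vs deployment population, with the gap."""
    shares = pd.DataFrame({
        "train": train_groups.value_counts(normalize=True),
        "deploy": deploy_groups.value_counts(normalize=True),
    }).fillna(0.0)  # a group absent from one side gets share 0
    shares["gap"] = shares["train"] - shares["deploy"]
    return shares.sort_values("gap")

# Large negative gaps flag populations the model has barely seen in training;
# large positive gaps flag groups over-represented relative to deployment.
```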
Beyond technical knowledge, Paul emphasizes the importance of empathy for the impact of AI models on users.
Most importantly, Paul highlights passion as the most critical quality. A genuine enthusiasm for the field and a desire to leverage AI for positive outcomes are essential for success in this role.
AI for Social Good
Paul expresses optimism about the positive societal impacts of AI. He highlights several examples:
- Personalized Customer Interactions: AI enables highly personalized interactions with customers across various sectors, including banking, eLearning, and more. This allows for tailored experiences and targeted information delivery, exceeding previous capabilities.
- AI on Mobile Devices: AI systems can be delivered on mobile devices, making them accessible even in regions with limited traditional IT infrastructure. This opens doors to applications like personalized learning opportunities in regions with underdeveloped educational systems.
- Medical Advancements: AI is playing a significant role in medical breakthroughs. For instance, DeepMind’s AlphaFold 3 has the potential to revolutionize disease identification, treatment, and vaccine development, leading to a major boost in drug discovery. Personalized medicine also holds immense potential.
- Environmental Sustainability: AI can be a powerful tool in achieving the UN’s Sustainable Development Goals. It can be used to predict pathways to net-zero emissions and optimize energy efficiency. For example, AI models are being used to optimize workload distribution in data centers, leading to significant efficiency improvements.
Paul emphasizes that AI ethics plays a crucial role in maximizing these benefits by mitigating potential harms and limitations associated with AI development. He expresses his enthusiasm for the future of AI, particularly its potential to solve problems and contribute meaningfully to society.