Onur Korucu - Managing Partner | Non-Executive Director | Advisory Board Member
WomenTech Global Ambassador & Council Member | IAPP Advisory Board Member and KnowledgeNet Chapter Chair
Onur Korucu currently serves as Managing Partner, Non-Executive Director, and Advisory Board Member across corporate and academic institutions in EMEA and the US, where she shapes strategies in data protection, cybersecurity, and AI governance. She is also a shareholder in companies specializing in data protection, cybersecurity, and AI automation.
Onur was honoured by Business Post as one of the “Top 100 Most Powerful People in Tech in Ireland,” and she is the only woman from the cybersecurity field featured on this prestigious list. Recognised for her groundbreaking contributions, she was awarded the IAPP Privacy Vanguard Award 2024, a highly esteemed honour given to only one individual per continent, which she received for EMEA in recognition of her leadership in privacy and AI governance.
Her professional journey spans leading roles at multinational professional services firms such as KPMG, PwC, and Grant Thornton, where she managed cybersecurity and privacy teams across global regions. She has also served as Head of Data Protection and Cybersecurity at international firms and as Senior Group Manager for GRC, Cybersecurity, and Data Protection at Avanade UK & Ireland, a joint venture of Microsoft and Accenture.
In addition to her engineering and M.Sc. degrees, Onur holds an LL.M. in Information and Technology Law and completed an executive master’s program in Business Analytics at the University of Cambridge. She lectures on Cybersecurity and Data Protection master’s programs at universities in Istanbul, Dublin, and London, and she is a member of the advisory board for the MS in Cybersecurity program at the University of California.
As an author and thought leader, she has written a book on risk-based global approaches to enhancing data protection and published articles in prestigious outlets such as Harvard Business Review, covering trends in technology, cybersecurity, and governance.
Onur is a WomenTech Global Ambassador, an International Association of Privacy Professionals (IAPP) Advisory Board Member, and the Dublin KnowledgeNet Chapter Chair.
Her achievements have been further recognized with nominations and awards, including the PICASSO Europe Privacy Award in the Privacy Executive category, GRC Role Model of the Year, Technology Consulting Leader, Cyber Women of the Year, Risk Leader, and The Technology Businesswoman awards.
In an era where data drives every interaction, the lines between personalisation, security, privacy, and innovation are more blurred than ever. AI is no longer a futuristic concept; it’s an everyday reality, transforming how we work, live, and connect. But with each leap in technology, new questions arise about trust, governance, and the responsibilities we all share in shaping the digital world.
Last week at the Dublin Tech Summit, I had the privilege of moderating the “Hit Me with Your Best Bot” panel, where we explored the evolving role of AI-powered chatbots, from companions to potential decision-makers. During that conversation, I shared a personal experiment: a few days spent with an AI chatbot designed to mimic companionship and even offer a “virtual partner” experience. While the bot was impressively responsive, it never truly crossed the uncanny valley from mimicry to genuine connection.
Yet these experiences are just the rough drafts of what’s coming next. At Google I/O 2025, we saw multimodal AI like Project Astra and Gemini 2.5 Pro push the boundaries further. These systems can see what you see, remember your context, respond with emotional nuance, and even help you complete tasks in real time. They’re not just reactive; they’re contextually aware. They don’t just talk; they listen, adapt, and anticipate.
OpenAI’s collaboration with Jony Ive on a groundbreaking new AI device is another spectacular example of how technology is reshaping the world we live in. This futuristic platform blends design and AI to create a presence that feels less like a product and more like a partner: an assistant that not only answers questions but also predicts needs and integrates seamlessly into daily life. It’s a reminder that we’re moving into an era where technology doesn’t just assist us but actively shapes the way we experience the world.
With these leaps forward, the question isn’t just whether AI can become our companions, therapists, teachers, or mentors, but whether it should. When algorithms can mirror human behavior with uncanny precision, the risk of manipulation becomes a design feature, not an accident. As Harari warned, “Once you understand how someone feels, you can influence how they behave.”
Movies like Her highlight the comfort of intimate AI relationships, but also the fragility of those connections. Ex Machina goes even deeper, showing us how emotional intelligence can be weaponized. As we push the boundaries of what AI can do, we must also question how these systems reflect—and reshape—our humanity.
AI Governance: The Shift from Reactive Compliance to Proactive Strategy
One of the biggest misconceptions I see in the industry is the belief that once a model is trained and deployed, it’s “done.” In reality, AI is dynamic: it learns, it drifts, and it adapts. That’s why AI governance isn’t a “nice to have”; it’s essential.
The regulatory landscape is evolving rapidly. In the EU, we have the AI Act, GDPR, and soon the Data Act. The U.S. is more fragmented, with state-level privacy laws and sector-specific AI guidance. Other countries, such as Canada, Brazil, Japan, and China, are all pushing out their own frameworks. For global companies, it’s not just about compliance; it’s about constant adaptation.
The most effective companies I’ve worked with don’t wait for the law to catch up. They build internal structures that allow them to act responsibly, regardless of jurisdiction. That means:
- Building risk-based AI governance programs that classify AI systems by use-case risk and impact (a minimal sketch of this kind of classification follows this list).
- Embedding privacy and ethics into development from the start, not as an afterthought.
- Creating cross-functional teams—legal, privacy, security, engineering, product—that break down silos and keep governance agile.
- Monitoring regulatory developments continuously and aligning local compliance with global standards.
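To make the first item on this list concrete, here is a minimal, illustrative sketch in Python of what a risk-based classification could look like. The tier names, use-case attributes, and ordering rules are assumptions for illustration, loosely inspired by the EU AI Act’s risk-based structure; they are not a prescribed methodology and would need to be tailored to each organisation’s context.

```python
# Illustrative sketch only: classifying AI use cases into risk tiers.
# Tier names, attributes, and rules are assumptions loosely mirroring
# the EU AI Act's risk-based structure, not a prescribed methodology.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited uses
    HIGH = "high"                   # strict controls and assessments
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # standard good practice


@dataclass
class AIUseCase:
    name: str
    manipulates_behaviour: bool = False     # e.g. exploitative targeting
    affects_legal_or_safety: bool = False   # decisions with legal or safety impact
    uses_sensitive_data: bool = False       # special-category personal data
    interacts_with_humans: bool = False     # chatbots, assistants


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a use case to a risk tier with simple, ordered rules."""
    if use_case.manipulates_behaviour:
        return RiskTier.UNACCEPTABLE
    if use_case.affects_legal_or_safety or use_case.uses_sensitive_data:
        return RiskTier.HIGH
    if use_case.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    examples = [
        AIUseCase("CV screening for hiring", affects_legal_or_safety=True),
        AIUseCase("Customer support chatbot", interacts_with_humans=True),
        AIUseCase("Internal spell checker"),
    ]
    for uc in examples:
        print(f"{uc.name}: {classify(uc).value}")
```

The value of a sketch like this is not the code itself but the discipline it encodes: every AI use case gets asked the same questions, in the same order, before it ships.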
Security by Design: AI as Both Shield and Target
AI is a powerful ally in cybersecurity, but it also introduces new risks. AI-driven phishing attacks, deepfakes, and automated hacking tools are becoming alarmingly sophisticated. At the same time, AI systems themselves can be targets. Techniques like data poisoning and adversarial attacks can compromise models, creating vulnerabilities in the very systems meant to protect us.
Transparency is key. AI models must be explainable, auditable, and aligned with ethical guidelines. This builds not just regulatory compliance but also trust, an increasingly scarce commodity in the digital world.
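As a small illustration of what “auditable” can mean in practice, the sketch below appends a structured record of every model decision, including a content hash so later tampering is easier to detect. The record fields and the predict() stub are assumptions for illustration only; a real audit trail would also need to address access control, retention, and integrity guarantees.

```python
# Illustrative sketch only: an append-only audit trail for model decisions.
# The record fields and the predict() stub are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone


def predict(features: dict) -> str:
    """Stand-in for a real model call (assumed for this sketch)."""
    return "approve" if features.get("score", 0) > 0.5 else "refer_to_human"


def audit_record(model_version: str, features: dict, decision: str) -> dict:
    """Build a structured, hashable record of a single model decision."""
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "decision": decision,
    }
    # A content hash over the record makes later tampering easier to detect.
    payload["record_hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload


if __name__ == "__main__":
    features = {"score": 0.72, "channel": "web"}
    decision = predict(features)
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(audit_record("model-1.3.0", features, decision)) + "\n")
    print(decision)
```

Even a lightweight trail like this makes it possible to answer, after the fact, which model version produced which decision and on what inputs.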
Personalisation vs. Privacy: The Human Factor
Algorithms analyze what we say, how we say it, and the patterns in our habits and preferences, and they generate outcomes accordingly. Personalisation makes this even more potent. And when you combine that with our deeply human need for connection and validation, we must ask: Are we heading toward a future where privacy and ethics are quietly sacrificed for emotional comfort?
As we navigate this transformation, we have to remember that the knowledge legacy of technology belongs to everyone. No one should hide behind community labels, nor should imposter syndrome hold back the voices of the many talented professionals who are ready to shape the future. The technology and innovation shaping tomorrow’s world are built on the collective contributions of diverse, unique perspectives.
The Future Is Now, and We Are All Stakeholders!
In a world where AI can mentor the mentors and lecture the lecturers, the only constant is change. AI won’t just automate tasks; it will reshape industries, economies, and even human relationships. Innovation won’t stop, and it shouldn’t! But we must shape it with care. Because if we don’t, manipulation will shape us and reshape society in ways we may not easily reverse.
We have a choice: to lead this change with intention, or be led by it. Let’s not forget: we built these systems. And that means we can still choose the kind of relationship we want to have with them. The future of AI is being written right now, by all of us. Let’s make sure it’s a future that empowers, respects, and inspires; one that is ethical, trustworthy, and inclusive for all.