Building Awareness of the Implications of AI Usage in Organizations

Artificial Intelligence (AI) has become an integral part of modern organizational operations, promising efficiency and innovation. However, with great power comes great responsibility, and the rise of AI in workplaces has introduced significant security concerns that need to be addressed. In this article, we will explore the critical need for awareness about the implications of AI usage, both within organizations and in personal use, emphasizing the importance of open discussions and incorporating AI pitfalls into security awareness programs.

Data Leakage Beyond Public AI Tools

Data leakage is a multifaceted problem exacerbated by AI. It's not just about posting sensitive information on public AI platforms. For example, an employee might input proprietary data into an AI tool to get insights, not realizing that this action could lead to unauthorized access or misuse. However, even moving to internal AI tools doesn't completely solve the problem.
One of the basic principles of information security is maintaining confidentiality, and part of that is implementing access controls, including need-to-know restrictions. Many internal AI deployments do not currently enforce need-to-know, meaning that if a data analyst and a CEO ask the AI tool the same question, they will get the same data. The biggest issue with that is that some employees may not understand the implications of sharing the data they receive. In the M&A world, premature disclosure of a potential deal can become the kiss of death for the project. People who are read in to the transaction are well aware of this. However, if people outside that inner circle learn about it through AI indiscretions, they may not understand how sharing that information can have catastrophic consequences.
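
To make the need-to-know gap concrete, here is a minimal sketch in Python of one possible mitigation: filtering the documents an internal AI assistant can draw on by the requester's entitlements before the model ever sees them. All names here (Document, User, filter_for_user, the "m&a" label) are hypothetical illustrations, not any specific product's API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # Need-to-know labels such as {"m&a"}; a doc with no labels is general-purpose.
    labels: set = field(default_factory=set)

@dataclass
class User:
    user_id: str
    # Need-to-know groups this user has been read in to.
    entitlements: set = field(default_factory=set)

def filter_for_user(user: User, candidates: list) -> list:
    """Keep only documents whose every label the user is entitled to see.

    Applying this filter *before* documents reach the model means the
    assistant cannot repeat what the requester is not cleared to know.
    """
    return [d for d in candidates if d.labels <= user.entitlements]

# --- Example ---
docs = [
    Document("d1", "Q3 all-hands summary"),                    # no labels: everyone
    Document("d2", "Project term sheet for a pending deal", {"m&a"}),  # read-in only
]
analyst = User("analyst01")
ceo = User("ceo", entitlements={"m&a"})

print([d.doc_id for d in filter_for_user(analyst, docs)])  # ['d1']
print([d.doc_id for d in filter_for_user(ceo, docs)])      # ['d1', 'd2']
```

The design point is where the control sits: enforcing entitlements before retrieval results reach the model means confidentiality does not depend on the model "choosing" not to reveal something.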

Blind Trust in AI Responses

One of the most pressing issues is the blind trust users place in data produced by AI. AI tools are designed to simulate human-like responses, often leading users to accept these outputs without question. This uncritical acceptance can result in flawed decision-making.
AI bias and misinformation are one element, but even when AI gives us correct answers, we should not blindly follow them. AI doesn't invent new ideas; it isn't creative; it recombines what it was given. For example, suppose Company A uses AI to generate a marketing plan. The Company A team challenges the tool again and again, perfecting the plan to fit its needs. Later, Company B asks the AI to generate a marketing plan and receives a great output, shaped in part by Company A's iterations. But if Company B blindly follows the recipe, it will still be a few steps behind Company A, and once Company A launches its campaign, Company B will realize it has very little to show for it.

The Rise of Shadow AI

Shadow IT—where employees use unauthorized applications and devices—is a well-known challenge. Now, we face a new frontier: shadow AI.
Take, for example, the use of AI companions during virtual calls. Many people now rely on these tools to summarize meetings and generate clear action items. Unauthorized use of such tools by employees, contractors, and other third parties can send data to uncontrolled accounts. This practice poses a significant security risk, as sensitive organizational data could be exposed through unmonitored channels. Organizations must recognize this threat and implement measures to detect and mitigate shadow AI usage; one starting point is sketched below.
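
As one illustration of what detection might look like, here is a minimal sketch that scans a web-proxy export for requests to AI services the organization has not approved. The CSV format, the column names, and the domains in UNAPPROVED_AI_DOMAINS are assumptions for the example; adapt them to whatever your proxy or CASB actually produces.

```python
import csv
from collections import Counter

# Hypothetical watchlist: domains of consumer AI / meeting-companion services
# the organization has not approved. Maintain this from your own review.
UNAPPROVED_AI_DOMAINS = {
    "example-notetaker.ai",
    "example-chatassistant.com",
}

def flag_shadow_ai(proxy_log_path: str) -> Counter:
    """Count per-user requests to unapproved AI domains.

    Assumes a CSV proxy log with 'user' and 'host' columns; adjust the
    parsing to match your proxy's actual export format.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in UNAPPROVED_AI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_shadow_ai("proxy.csv").most_common():
        print(f"{user}: {count} requests to unapproved AI services")
```

A report like this is a conversation starter, not a verdict: a spike in hits usually means employees have an unmet need that an approved tool should fill.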

Building a Culture of AI Awareness

Trying to stop the use of AI is, in my opinion, similar to taking a horse and buggy to work. We cannot and should not stop innovation, which means we have to foster a culture of awareness and vigilance that allows us to embrace innovation while still protecting the organization.

The key to embracing safe AI usage is awareness. Incorporate AI pitfalls into existing security awareness programs. Educate employees about the potential risks of blindly trusting AI responses and the implications of sharing data with AI tools. Make sure that people exposed to sensitive data understand AI risks pertaining to use and disclosure of such information. And most importantly, encourage open dialogues about AI usage within the organization. Create forums where employees can share their experiences, concerns, needs and best practices regarding AI tools.