Cyber security awareness month: how to use AI securely

Artificial intelligence (AI) is transforming how we work by automating operations, improving decision-making and delivering new efficiencies such as reading documents, summarising emails and analysing data more quickly. But as AI tools become more embedded in daily workflows, they also introduce new cyber security risks.

Without proper oversight, AI can become a gateway for data breaches, compliance violations and reputational damage.

For this edition of our cyber security awareness month guide, we will explore how to use AI securely and responsibly in the workplace.

Before we begin, a question to get you thinking: can you name an example of where you’ve used AI recently at work? Did you think about security before you opened the AI tool, or did you go ahead and input data without evaluating where your information was going?

The promise of AI in the workplace

AI tools are becoming more widely used in the workplace. As confidence in them grows and AI experts provide more support on how to use them strategically, more businesses are improving productivity and operational efficiency through AI.

However, the key to success lies in governance and control. Companies are investing more in secure infrastructure, data protection protocols and employee training to ensure that AI tools are used responsibly. This careful approach to adoption demonstrates that AI is not just a tech upgrade but a strategic asset that must be managed with care.

The risks of unverified AI tools

On the flip side, using AI tools without the correct vetting can lead to serious consequences.

In 2023, Samsung experienced a data leak when employees uploaded confidential code to ChatGPT, inadvertently exposing sensitive information.

This incident led to a company-wide ban on generative AI tools, highlighting the dangers of using consumer-grade platforms for business-critical tasks.

Another example is Air Canada, whose AI chatbot gave a customer incorrect information about the airline’s refund policy; a tribunal later ruled that the airline had to honour the chatbot’s promise, revealing vulnerabilities in chatbot governance. These cases underscore the importance of understanding how AI tools process and store data, and why businesses must implement strict usage policies. Without these added layers of protection, AI can become a liability rather than an asset.

Sophos report: AI’s double-edged impact on cyber security

According to Sophos’ 2025 report, Beyond the hype: The business reality of AI for cybersecurity, AI is having a two-pronged effect on cyber security. On one hand, AI-powered security tools are helping teams respond to threats faster and more effectively. On the other hand, the rise of shadow AI (unauthorised AI tools used by employees) is complicating cyber security efforts and increasing risk.

Sophos warns that cyber security awareness must now extend beyond phishing and malware to include how employees interact with AI tools. As Aaron Bugal, Field CISO at Sophos, explains:

“We’re witnessing a new era where security awareness must extend beyond phishing emails to include how people use and share sensitive data through AI tools. Governance and clear boundaries around AI usage are essential.”

This insight further reinforces the need for strong awareness, education and policy enforcement when using AI throughout your workday.

Best practices for using AI securely at work

Vet AI tools thoroughly

Before you adopt any AI tool, conduct a thorough security assessment. Look for certifications like ISO 27001 or SOC 2, which indicate that the provider follows industry-standard security practices. Review the tool’s privacy policy to understand how it handles data, and ensure that it complies with relevant regulations and frameworks (such as GDPR, HIPAA or Cyber Essentials).

Don’t rely solely on marketing claims; consult your IT partner to evaluate the tool’s architecture and risk profile. A secure AI tool should offer transparency, data encryption and user access controls. By vetting tools properly, you reduce the risk of introducing vulnerabilities into your organisation.

Avoid inputting sensitive data

AI tools often rely on user input to generate responses, which poses a major risk if that input contains sensitive data. Never input confidential business information, customer records, financial data or login credentials into AI platforms unless they are enterprise-approved and secure. Even seemingly harmless queries can reveal patterns or proprietary insights if aggregated over time.

Educate employees on what constitutes sensitive data and why it should be protected. Encourage the use of anonymised or generic inputs when testing AI tools. Remember, once data is entered into an AI system (especially cloud-based ones) you may lose control over how it’s stored or used.
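One practical safeguard is to screen prompts for obvious sensitive values before they leave your environment. The sketch below is a minimal, illustrative Python example of such a redaction step; the patterns and placeholder labels are assumptions for the example, not an exhaustive or production-ready filter:

```python
import re

# Illustrative patterns for common sensitive values (assumed, not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),       # 13-16 digit card-like numbers
    "UK_PHONE": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely sensitive values with placeholders before the
    prompt is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A real deployment would pair a filter like this with employee training, since regex alone cannot catch every kind of confidential content.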

Use enterprise-grade AI platforms

Consumer-grade AI tools may be convenient, but they often lack the security features needed for business use. Instead, opt for enterprise-grade platforms that offer admin controls, audit logs, data residency options and user management features. These tools are designed with business needs in mind and provide greater visibility into how data is processed.

Enterprise solutions also allow for integration with existing security infrastructure, such as identity management systems and endpoint protection. This ensures that AI usage is aligned with your organisation’s broader cyber security strategy. Investing in secure platforms is not just a technical decision; it’s a business imperative.

Monitor for shadow AI

Shadow AI refers to the use of unauthorised AI tools by employees, often without the knowledge of IT or security teams. This can happen when staff use personal accounts or free tools to complete tasks, bypassing official channels. Shadow AI introduces unknown risks, including data leakage, compliance violations and exposure to malicious code.

To combat this, implement monitoring systems that detect unsanctioned AI usage across your network. Use endpoint detection tools and cloud access security brokers (CASBs) to flag unusual activity. Most importantly, foster a culture of transparency where employees feel comfortable discussing their AI needs and seeking approval for new tools.
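As an illustration of what such monitoring can look like in practice, the sketch below scans proxy-style log lines for visits to well-known consumer AI services that aren’t on a sanctioned list. The log format, domain lists and function name are assumptions for the example; a real deployment would rely on your CASB or proxy vendor’s own reporting:

```python
# Assumed sanctioned tools and consumer AI domains (adapt to your organisation).
SANCTIONED = {"copilot.example-enterprise.com"}
CONSUMER_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a user reached a consumer AI
    service that is not on the sanctioned list.

    Assumed log format: "<user> <domain> <method> ..." per line.
    """
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in CONSUMER_AI_DOMAINS and domain not in SANCTIONED:
            hits.append((user, domain))
    return hits
```

Flagged results are a starting point for a conversation, not a punishment: the goal is to steer employees towards approved tools.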

Establish an AI governance policy

A clear AI governance policy is essential for managing risk. This policy should define what types of AI tools are allowed, how they should be used, and what data can be shared. Include guidelines on data classification, access controls, acceptable use, and incident reporting.

Regularly review and update the policy to reflect changes in technology and threat landscapes. Involve stakeholders from IT, legal, HR, and operations to ensure the policy is practical and enforceable. By setting clear boundaries, you empower employees to innovate responsibly while protecting your organisation’s assets.

Train employees on AI security

Technology alone isn’t enough. Your staff need to understand how to use AI securely. Provide training on prompt safety, data handling and the risks of tool misuse. Use real-world examples to illustrate how small mistakes can lead to big consequences, such as data leaks or reputational damage.

Encourage employees to ask questions and report suspicious activity. Make AI security part of your broader cyber security awareness programme, and tailor training to different roles and departments. When employees are informed and engaged, they become your first line of defence against AI-related threats.

This cyber security month, and every month, it’s important to evaluate how your business is using AI: without proper safeguards in place, the same tools that boost productivity can also reshape your risk landscape. Take the time to review your organisation’s AI practices, educate your teams and build a secure foundation for innovation.

Security isn’t just about firewalls anymore; it’s about smart choices, responsible innovation and shared accountability.

Contact us today or book a meeting via our meeting link for your AI review to learn more about the opportunities a secure AI adoption framework can provide you.


Speak to an IT Specialist

To find out more or to talk to one of our experts, contact us today.