
Six Months Out: Cybersecurity Expert Robert Bond’s Predictions on Generative AI Regulations and Risks

As artificial intelligence (AI) continues to advance, regulatory frameworks across the globe are evolving rapidly. Europe and Canada, for example, are arguably at the forefront of shaping AI governance, each focusing on addressing AI's greatest ethical risks.

AI Regulations and Risks

For example, consider the EU AI Act, which is set to roll out fully in the coming months. It is designed to foster trust in AI technologies by regulating them according to risk level: minimal, limited, high, or unacceptable.

High-risk applications—like those used in identity verification—will be under intense scrutiny. The EU AI Act also focuses on transparency, safety, and data governance for high-risk AI systems.

Meanwhile, Canada's Directive on Automated Decision-Making (ADM) governs AI in federal government decision-making, mandating transparency, accountability, and impact assessments for fairness. With the Artificial Intelligence and Data Act (AIDA) impending, Canadian businesses should expect upcoming regulations to emphasize data privacy, security, and ethical AI use, particularly in industries handling sensitive information, such as healthcare, legal services, and finance.

What's Next for Businesses?

In the next six months, industries must be prepared to comply with these emerging AI regulations. This means navigating a complex landscape of legal, ethical, and operational challenges. This is particularly true for those industries that rely heavily on trust and validation, such as the notary profession.

Here, the use of AI to authenticate signatures or verify identities introduces risks of fraud, highlighting the need for comprehensive validation processes and internal policies.

In the meantime, generative AI—used for everything from chatbots to contract analysis to document drafting—will face tighter regulation. Why? Concerns over data privacy, intellectual property, and confidentiality are gaining traction. For instance, AI systems that analyze contracts or facilitate due diligence could unintentionally expose sensitive client information, underscoring the importance of risk assessments. In many cases, not knowing what you don't know could prove problematic.

Risk Assessments and Policies: What Industries Must Do Next

Businesses must be proactive in implementing AI risk assessments to evaluate the legal, ethical, and operational impacts of generative AI. Key questions to address include:

  • Why is AI being used? 
  • Who provides the technology? 
  • How does it comply with legal requirements? 
  • What are the impacts on human rights, intellectual property, and client confidentiality? 

For high-risk industries (again, like notary services), where validating identities and signatures is crucial, these assessments should identify vulnerabilities, such as the risk of AI-generated signatures being fraudulent.
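The four questions above could be captured as a lightweight internal checklist that flags unanswered items for reviewers. A minimal sketch in Python—all class and field names here are illustrative assumptions, not part of any regulation or standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """Illustrative checklist mirroring the article's key questions.
    Field names are assumptions for this sketch, not a formal schema."""
    purpose: str        # Why is AI being used?
    vendor: str         # Who provides the technology?
    legal_basis: str    # How does it comply with legal requirements?
    # Impacts on human rights, intellectual property, confidentiality:
    impact_notes: list[str] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Return every question still lacking an answer."""
        gaps = []
        if not self.purpose:
            gaps.append("Why is AI being used?")
        if not self.vendor:
            gaps.append("Who provides the technology?")
        if not self.legal_basis:
            gaps.append("How does it comply with legal requirements?")
        if not self.impact_notes:
            gaps.append("What are the impacts on rights, IP, and confidentiality?")
        return gaps

# Example: a notary-service use case with two questions still open
assessment = AIRiskAssessment(
    purpose="Signature verification for notarised documents",
    vendor="",  # provider not yet selected
    legal_basis="EU AI Act high-risk obligations",
)
print(assessment.open_questions())
```

A structure like this makes gaps visible early: an assessment cannot quietly skip a question, because anything left blank surfaces in the review output.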

Developing a comprehensive AI policy is also essential. This will help establish clear guidelines on the use of AI, including data protection, confidentiality, and compliance with local and international regulations.

Building AI Governance Structures: Policies and Training

To minimize risks and ensure compliance, businesses must establish robust AI governance structures. A well-defined AI policy should address the benefits and limitations of AI tools while setting rules for their use. It should cover critical areas such as data privacy, intellectual property, and client confidentiality. This way, AI use aligns with company values and legal obligations.

Ongoing training is also critical. Staff, especially in high-risk areas like legal services, should be trained in responsible AI use, with an emphasis on human oversight of decision-making processes involving AI-generated data. This combination of policies and training helps businesses reduce risks and maintain trust.

Final Thoughts

As the AI regulatory landscape evolves, businesses must proactively adapt to new laws and demonstrate their commitment to responsible AI use.

Over the next six months, from Canada to the EU and beyond, it's about implementing strong AI governance frameworks, conducting risk assessments, and developing policies to address the ethical and operational challenges posed by generative AI and chatbots.

By taking these steps, businesses can minimize risks, ensure compliance, and build a foundation for ethical and responsible AI deployment, setting the stage for long-term success. In short, it's about the right preparation today for better peace of mind tomorrow.

Robert Bond, BA, CCEP, CITP, FSALS, CompBCS | Industry Commissioner, Data & Marketing Commission | Senior Counsel & Notary Public, The Privacy Partnership Limited


