Privacy and Ethical Concerns Around AI

The risks of AI development, and the principles that should guide it

Camden Woollven is head of AI at GRC International Group (IT Governance USA’s parent company). She’s responsible for identifying and implementing AI technologies that can improve the Group’s productivity and efficiency.

As our AI subject-matter expert, she aims to lead a cultural shift toward the adoption of AI technologies, and to promote a mindset of continual learning and innovation as we develop AI-related competencies and capabilities.

We sat down to chat with her about AI ethics and privacy concerns.


In this interview

  • Ethical principles for guiding AI development
  • How those principles relate to data privacy
  • High-risk domains, such as health care
  • Why AI ethics requires a team effort

What ethical principles should guide AI development?

For a start, you need to be totally open about how AI works, especially when it comes to high-stakes situations that impact people’s lives – in health care, for example.

Key principles include:

  • Fairness
  • Transparency
  • Accountability
  • Making sure AI benefits humanity

Let’s go through each of those principles individually. First, how can we ensure more fairness?

A big part of it is tackling bias. This is super important.

You’ve got to make sure you’re being fair by using data that represents everyone, and creating algorithms that don’t discriminate against anyone. That means being diligent about testing for bias and discrimination throughout the entire AI development process.

From the initial data collection and cleaning, to model training and evaluation, fairness needs to be a key metric. That means you must constantly assess things like:

  • Are certain groups being treated differently by the AI for no justifiable reason?
  • Is the model performance consistent across different demographics?

Catching and mitigating unfair bias early and often is crucial.
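
To make that concrete, here’s a minimal sketch of what a per-group performance check might look like in Python. It assumes scikit-learn, pandas, binary 0/1 labels, and hypothetical column names (“label”, “prediction”, “group”) – real fairness audits go much deeper:

```python
# Minimal sketch of a per-group performance check.
# Assumes a pandas DataFrame with hypothetical columns:
# "label" (0/1 ground truth), "prediction" (0/1 model output), "group".
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def per_group_report(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby("group"):
        rows.append({
            "group": group,
            "n": len(sub),
            "accuracy": accuracy_score(sub["label"], sub["prediction"]),
            "recall": recall_score(sub["label"], sub["prediction"],
                                   zero_division=0),
            # Selection rate: how often this group gets the positive outcome.
            "selection_rate": sub["prediction"].mean(),
        })
    report = pd.DataFrame(rows)
    # Large gaps between groups on any metric are a signal to investigate.
    report["accuracy_gap"] = report["accuracy"].max() - report["accuracy"]
    return report
```

Run regularly, not just once before launch, a report like this is what the “early and often” part of the job looks like in practice.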

I suppose that, to provide assurance of fairness, you need transparency. How can we foster better transparency in AI systems?

Transparency starts with being upfront about what data is used to train the AI system, and where that data comes from.

For one thing, you can’t just feed personal data – from real, living individuals – into an opaque system without their knowledge.

But it’s not just about the data – you must also be able to explain, in plain language, how AI makes decisions.

If AI is making decisions that impact a person’s life, they have a right to know how and why. This is why it’s so important to build explainable AI, along with tools that translate complex algorithms into plain language.
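
As one illustration, feature-attribution libraries such as SHAP are a common way to translate an individual model decision into understandable terms. A sketch, assuming a trained tree-based model (model) and its feature data (X) already exist – neither is defined here:

```python
# Sketch only: `model` is an already-trained tree-based classifier
# (e.g., a scikit-learn random forest) and `X` is its feature data.
import shap

explainer = shap.Explainer(model)   # wraps the model for attribution
shap_values = explainer(X)          # per-feature contribution scores

# Which features drove predictions across the whole dataset?
shap.plots.beeswarm(shap_values)

# Why did the model make this one decision about this one person?
shap.plots.waterfall(shap_values[0])
```

The waterfall plot in particular answers the question a person affected by the decision actually has: which factors pushed my result this way?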

What about accountability – how can we ensure this in AI systems?

Accountability means there needs to be clear responsibility when AI causes problems or harm. Organizations that deploy it need to be held responsible for their AI systems.

This is no different from holding organizations responsible for the actions of their human employees.

High-stakes decisions need human oversight, regular audits, and clear avenues for recourse. To hold AI developers accountable, we also need:

  • Robust governance frameworks
  • Robust industry standards
  • Oversight bodies

Policymakers have a role to play in creating enforceable rules around AI transparency, accountability, and fairness to protect people’s rights.

The bottom line is that we need clear paths to accountability, whether that’s:

  • Humans double-checking important decisions by AI
  • Regularly testing and auditing your AI systems
  • Solid plans for when things go wrong
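
As a toy illustration of that first path – routing important AI decisions through a person – a deployment might gate automated actions behind a confidence and stakes check. All names and thresholds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float  # the model's own probability estimate, 0.0 to 1.0
    high_stakes: bool  # e.g., affects someone's credit, health, or liberty

def route(decision: Decision, threshold: float = 0.9) -> str:
    """Send risky or uncertain decisions to a human instead of acting."""
    if decision.high_stakes or decision.confidence < threshold:
        return "human_review"  # a person confirms or overrides the AI
    return "auto_approve"      # only low-stakes, high-confidence cases

# A confident model still doesn't get to act alone on a high-stakes call:
print(route(Decision("deny_loan", confidence=0.97, high_stakes=True)))
# -> "human_review"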

How can we make sure AI benefits humanity, and that even advanced systems remain aligned with human values?

We’ve got to start by getting crystal clear as a society on what core values we expect AI to stick to.

Then, we need to hard-code those values into AI systems from the ground up.

That means:

  • Again, rigorous testing and auditing – are the systems behaving how we want them to?
  • Constantly monitoring the systems for any misaligned behavior
  • Remaining on standby to course-correct at short notice
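
On the monitoring point, even something as simple as comparing a live decision rate against a baseline measured in testing can surface misaligned behavior early. A minimal sketch, with made-up numbers:

```python
import statistics

BASELINE_RATE = 0.62  # hypothetical approval rate measured in testing
TOLERANCE = 0.05      # how far the live rate may drift before we act

def behavior_ok(recent_outcomes: list[int]) -> bool:
    """recent_outcomes: 1 per positive decision, 0 per negative."""
    live_rate = statistics.mean(recent_outcomes)
    if abs(live_rate - BASELINE_RATE) > TOLERANCE:
        # Course-correct: flag for humans rather than failing silently.
        print(f"ALERT: live rate {live_rate:.2f} has drifted from baseline")
        return False
    return True
```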

We can’t just let AI run wild – especially when the stakes are high. As a recent example in the news, the US urged China and Russia to declare that only humans, and not AI, will control nuclear weapons.

But even in situations that aren’t a matter of life and death, humans must stay in the loop and have meaningful oversight.

AI should be a tool to enhance, not replace, human decision-making.

How do these principles link or relate to data privacy?

Generative AI systems are essentially trained on huge amounts of data scraped from the web. That can include personal data posted online by people who had no idea it might be used to train AI systems, which means sensitive or personal information could end up in the AI’s outputs.

On top of that, the queries people type into search engines can reveal a lot about them:

  • Interests
  • Personal lives
  • In some cases, identity

If that query data is also used to train AI models, that’s another potential privacy issue.

It comes down to transparency – again, you can’t just feed personal information into an opaque system without the person’s knowledge.

Organizations integrating AI therefore need to be clear and upfront about what data they’re collecting and feeding into the AI system, and give people the ability to opt out. They must also implement robust safeguards to prevent the AI from regurgitating personal data in its responses.
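
As a simplified illustration of that last safeguard, an output filter might mask obvious personal data before a response leaves the system. The patterns below are toy examples only – production systems pair dedicated PII detection models with human policy review:

```python
import re

# Toy patterns for illustration; real PII detection uses trained
# NER models and much broader pattern sets, not a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{8,}\d\b"),
}

def redact(text: str) -> str:
    """Mask likely personal data in an AI response before returning it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or 555-123-4567."))
# -> "Contact me at [REDACTED EMAIL] or [REDACTED PHONE]."
```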

Earlier, you mentioned health care. This is a domain that inherently involves a lot of sensitive data and life-and-death situations. What must we consider here when it comes to AI usage?

As you say, people’s lives can be in danger, so those AI systems must be safe, accurate, and unbiased. If an AI has harmful biases – working well for some patients but not others simply because of race or skin color, say – it could lead to seriously harmful outcomes.

In health care, AI has the potential to be used for diagnosing diseases, recommending treatments, and even assisting in surgeries. Once again, the stakes are high, and we have a ton of ethical challenges to deal with.

In what other domains do we need to take extra care when deploying AI?

Criminal justice is another good example. We must be extremely careful not to amplify existing biases or create new inequities. An AI that incorrectly flags certain groups as high-risk, or recommends harsher sentences based on biased data, would be a serious injustice.

In domains like health care and criminal justice, transparency is critical. We need to be able to audit and understand exactly how AI is making decisions that profoundly impact people’s lives.

Do you have any final words of advice?

Shaping AI ethics is a team effort – every stakeholder has an important role to play:

  • Developers need to bake ethics in from the get-go, always thinking about potential pitfalls and working to avoid them.
  • Organizations using AI must make ethics a top priority – not just pay it lip service. They need clear principles and ways to hold themselves accountable.
  • Policymakers should be working on smart regulations that both encourage innovation and protect the public.
  • Academics and advocacy groups can keep the ball rolling by pushing for more research, and making sure industry and governments are held accountable.

Ultimately, AI should be developed to improve people’s lives, not just to showcase impressive technology. We need all parties working together to hash things out and make sure we stay on the right track.


Want to teach staff to adopt AI best practices in the workplace?

Secure your organization’s future with our AI awareness training.

  • Navigate the complex landscape of AI with ease, ensuring your organization remains compliant and your data stays private
  • Understand AI’s historical evolution and its role in modern business
  • Stay ahead of the curve by mastering AI’s legal implications and practical uses in your industry

We hope you enjoyed this edition of our ‘Expert Insight’ series. We’ll be back soon, chatting with another expert within GRC International Group.

In the meantime, why not check out our interview with privacy consultant Mark James on deploying AI systems in compliance with data protection laws?

If you’d like to get our latest interviews and resources straight to your inbox, subscribe to our free Security Spotlight newsletter. Alternatively, explore our full index of interviews here.