Responsible AI and third-party risk management: what you need to know
As AI is rapidly integrated into core organisational processes, many companies may not fully recognise the extent of its use by their vendors and partners.
Organisations are leveraging AI across various activities to boost performance and streamline decision-making, including conducting data analysis, enhancing functionalities within cloud platforms and SaaS tools, personalising marketing efforts, deploying customer support chatbots and detecting fraud.
Consequently, the quality of services and delivery can suffer if AI systems are inadequately implemented or misunderstood. Furthermore, the implications of AI use or misuse are profound, encompassing ethical issues, security vulnerabilities, reputational risks and potential legal ramifications.
A few examples include:
- GenAI hallucinations in professional services. Press reports and court filings have revealed instances in which case citations or references were entirely fabricated by GenAI.
- Sensitive data exposure. Inadequate data anonymisation led to Google and the University of Chicago Medical Center facing a lawsuit accusing them of sharing patient records with AI teams, with potential re-identification risks.
- Automated decision-making bias. In 2019, the credit card algorithm from Apple and Goldman Sachs was scrutinised for allegedly discriminating against women by offering them lower credit limits than male applicants with similar financial profiles.
- Chatbot failures. Air Canada’s customer service chatbot misstated the airline’s refund policy, promising a refund the policy did not allow. A tribunal ruled against Air Canada, holding the company liable for the chatbot’s output.
As a result, organisations must be vigilant in identifying, assessing and managing the risks associated with AI technologies, ensuring that they address any errors or biases in AI-driven processes, uphold ethical standards, protect sensitive information and comply with regulatory requirements.
[....]