
FS-ISAC Issues Guidance to Address Artificial Intelligence Risks in the Financial Sector

(source: Kanchanara, Unsplash)

FS-ISAC, a nonprofit organization that promotes cybersecurity and resilience in the global financial system, has published six white papers aimed at helping financial institutions understand the threats, risks, and ethical use cases of artificial intelligence (AI).

The documents complement knowledge and resources from government agencies, academic researchers, and financial services partners, including the Financial Services Sector Coordinating Council (FSSCC), the Bank Policy Institute (BPI/BITS), and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.

The six white papers identify the threats and risks associated with AI and provide frameworks and tactics that financial services firms can customise based on their size, needs, and risk appetites:

  • Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks: Defines and maps the existing threats associated with AI and characterises the types of attacks and vulnerabilities this new technology presents to the financial services industry, as well as security controls that can be used to address those risks.
  • Building AI into Cyber Defenses: Highlights financial services’ key considerations and use cases for leveraging AI in cybersecurity and risk technology.
  • Combating Threats and Reducing Risks Posed by AI: Outlines the mitigation approaches necessary to combat the external and internal cyber threats and risks posed by AI.
  • Responsible AI Principles: Examines the principles and practices that ensure the ethical development and deployment of AI in alignment with industry standards.
  • Generative AI Vendor Evaluation and Qualitative Risk Assessment: A customisable tool designed to help financial services organisations assess, select, and make informed decisions about generative AI vendors while managing associated risks.
  • Framework of Acceptable Use Policy for External Generative AI: Guidelines to assist financial services organisations in developing an acceptable use policy when incorporating external generative AI into security programs.

“The multi-faceted nature of AI is both compelling and ever-changing, and the education of the financial services industry on these risks is imperative to the safety of our sector,” said Hiranmayi Palanki, Principal Engineer at American Express and Vice Chair of FS-ISAC’s AI Risk Working Group.

To learn more, read or download the papers here.

Written by
Tech Beat Philippines

