
FS-ISAC Issues Guidance to Address Risks of Artificial Intelligence in the Financial Sector

(source: Kanchanara, Unsplash)

FS-ISAC, a nonprofit organization that promotes cybersecurity and resilience in the global financial system, has published six white papers aimed at helping financial institutions understand the threats, risks, and ethical use cases of artificial intelligence (AI).

The documents complement existing resources from government agencies, academic researchers, and financial services partners, including the Financial Services Sector Coordinating Council (FSSCC), the Bank Policy Institute (BPI)/BITS, and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.

The six white papers identify the threats and risks associated with AI and provide frameworks and tactics that financial services firms can customise based on their size, needs, and risk appetites:

  • Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks: Defines and maps the existing threats associated with AI and characterises the types of attacks and vulnerabilities this new technology presents to the financial services industry, as well as security controls that can be used to address those risks.
  • Building AI into Cyber Defenses: Highlights financial services’ key considerations and use cases for leveraging AI in cybersecurity and risk technology.
  • Combating Threats and Reducing Risks Posed by AI: Outlines the mitigation approaches necessary to combat the external and internal cyber threats and risks posed by AI.
  • Responsible AI Principles: Examines the principles and practices that ensure the ethical development and deployment of AI in alignment with industry standards.
  • Generative AI Vendor Evaluation and Qualitative Risk Assessment: A customisable tool designed to help financial services organisations assess, select, and make informed decisions about generative AI vendors while managing associated risks.
  • Framework of Acceptable Use Policy for External Generative AI: Guidelines to assist financial services organisations in developing an acceptable use policy when incorporating external generative AI into security programs.

“The multi-faceted nature of AI is both compelling and ever-changing, and the education of the financial services industry on these risks is imperative to the safety of our sector,” said Hiranmayi Palanki, Principal Engineer at American Express and Vice Chair of FS-ISAC’s AI Risk Working Group.

To learn more, read or download the papers on the FS-ISAC website.

Written by
Tech Beat Philippines

Tech Beat Philippines is the social media news platform for all things technology. It is also a part of the GEARS section on Daddy's Day Out.
