Choice Takes on Money20/20 2023
With Growth Comes Regulation
Over three and a half days, compliance and regulation were the focus of discussion.
Fintech companies operate at the intersection of finance and technology, leveraging cutting-edge solutions to enhance and streamline financial services. However, with innovation comes responsibility, and fintech companies must operate within established regulations to ensure consumer protection, data security, and overall financial stability.
The financial technology industry is growing exponentially, with projections that revenue will increase sixfold from $245 billion to $1.5 trillion by 2030. This growth is drawing increased expectations from regulators. Fintechs welcome greater clarity from their bank partners and regulators on expectations and compliance requirements. Regulators are pushing for more guardrails.
“To sit in a room of bank presidents and CEOs and hear them talking about regulatory compliance reviews of fintech partners as one of their biggest issues was downright amazing for me. Having been in the compliance space for almost 20 years, this is not something that happens often. It was great to hear that our focus at Choice is also the focus of our peer banks.”
– Michelle Bass, Fintech Regulatory Compliance Oversight Manager
Here are the best practices we gathered for moving forward:
- Be relentless about your focus on regulations.
- Be proactive in talking to regulators.
- Listen to regulators. Collectively, we must all do our part to move the needle forward.
- Document and follow through. Do what you say you will and build trust with your customers and regulators.
What AI Can Do and What It Can’t
Artificial Intelligence (AI) interfaces are a great tool for getting projects started, generating ideas, or even identifying irregularities in data. However, they should not replace the human touch or serve as both the first and final draft of a piece of collateral. AI is a great tool, but it should be just one in your arsenal alongside people power.
There was also a general sense of the need to clarify the ethical principles underlying the use of AI. AI ethics refers to the principles and guidelines that govern the development, deployment, and use of AI systems in ways that are fair, transparent, accountable, and that respect fundamental human values. As AI continues to advance and integrate into various aspects of our lives, addressing ethical considerations becomes increasingly crucial. Some notable considerations include:
- Transparency: Transparency in AI means making AI systems’ decision-making processes understandable and explainable. This is important in applications where AI influences significant decisions, such as hiring, lending, or criminal justice. Understanding how an AI interface reaches its conclusions is crucial for accountability.
- Privacy: AI often processes vast amounts of personal data. Ethical AI practices involve respecting individuals’ privacy rights and implementing robust measures to protect sensitive information. Striking a balance between data utility and privacy is a key challenge in AI development.
- Security: Ensuring the security of AI systems is paramount. Safeguards must be in place to prevent malicious actors from exploiting vulnerabilities in AI algorithms, which could have severe consequences, especially in critical applications such as autonomous vehicles and healthcare.
- Human Agency and Autonomy: Ethical AI recognizes the importance of preserving human agency and autonomy. AI systems should complement human decision-making rather than replace it entirely. Humans should be able to understand, challenge, and override AI decisions when necessary.
- Continuous Learning and Adaptation: AI systems are dynamic and can adapt based on new data. Ethical AI requires continuous monitoring, learning, and adaptation to address emerging challenges and evolving ethical considerations.
Want to learn more about our team’s experience at Money20/20? We’d love to connect!
