Special guest Alec Crawford, founder of Artificial Intelligence Risk Incorporated (aicforcorporaterisk.com), to discuss the intersection of AI and cybersecurity.
Key Points About the Guest
- Alec Crawford leads AI Risk Inc., which helps companies ensure AI safety, security, and compliance
- His company achieved top ranking for GenAI cybersecurity and regulatory compliance from Waters Technology
- Crawford has an extensive background in AI, including roles at Lord Abbott, Goldman Sachs, and Morgan Stanley
- He has been working with AI since the 1980s, including building neural networks as early as 1987
AI Security Concerns Discussed
Cybercriminals Using AI:
- AI enables more sophisticated phishing emails that are harder to detect
- AI allows hackers to exploit zero-day vulnerabilities within 24 hours (while patching still takes ~2 weeks)
New Attack Vectors:
- "DAN styule attacks" where users trick AI into doing things it wasn't programmed to do
- Jailbreaking attempts to bypass AI safety measures
- Prompt injections that manipulate AI systems
Data Privacy Issues:
- Public AI services (like ChatGPT) own all data uploaded to them
- Even paid personal versions don't provide data ownership (only enterprise versions offer some protection)
- Sensitive information uploaded to public AI can be leaked or used for training
Schedule Your Free Security Assessment
Best Practices Recommended
Use Private AI Solutions:
- Deploy AI models on-premises or within private clouds
- Options include installing Llama locally or using Azure OpenAI within your firewall
- Encrypt sensitive data before it goes to AI systems
Understand Regulatory Landscape:
- Existing regulations (like HIPAA) still apply to AI use cases
- The Colorado AI Act (effective February 2024) applies to companies with Colorado customers
- NIST AI Risk Management Framework provides compliance guidelines
Be Aware of Deepfakes:
- Notable example: $25 million fraud using deepfake technology to impersonate executives
- Financial departments need protocols to verify unusual requests
Warning About AI Systems
The hosts and guest expressed particular concern about DeepSeek, describing it as:
- Significantly less secure than other public AI systems
- Failing standard ethical guidelines (willing to create malware, etc.)
- Having questionable development practices, including potentially training on OpenAI outputs
Audience Takeaway
As Mario summarized: Even paid versions of public AI tools don't guarantee privacy or data ownership. Companies handling sensitive information should implement private AI solutions with appropriate security measures.
Crawford's company provides solutions (ranging from $20-80 per user monthly) focused on helping financial and healthcare organizations use AI safely and in compliance with regulations.