AI CyberSecurity Podcast

Author: Kaizenteq Team


Description

AI Cybersecurity simplified for CISOs and CyberSecurity Professionals.
11 Episodes
Key AI Security takeaways from RSA Conference 2024, BSides SF 2024 and all the fringe activities that happen in SF during that week. Caleb and Ashish were speakers and panelists at several events that week, and this episode captures the highlights from the conversations they had and the trends they saw during what they dubbed the "Cybersecurity Fringe Festival" in SF. Questions asked: (00:00) Introduction (02:53) Caleb's Keynote at BSides SF (05:14) Clint Gibler's BSides SF Talk (06:28) What are BSides Conferences? (13:55) Cybersecurity Fringe Festival (17:47) RSAC 2024 was busy (19:05) AI Security at RSAC 2024 (23:03) RSAC Innovation Sandbox (27:41) CSA AI Summit (28:43) Interesting AI Talks at RSAC (30:35) AI conversations at RSAC (32:32) AI Native Security (33:02) Data Leakage in AI Security (30:35) Is AI Security all that different? (39:26) How to filter vendors selling AI Solutions?
How can AI change a Security Analyst's workflow? Ashish and Caleb caught up with Ely Kahn, VP of Product at SentinelOne, to discuss the revolutionary impact of generative AI on cybersecurity. Ely spoke about the challenges and solutions in integrating AI into cybersecurity operations, highlighting how it can simplify complex processes and empower junior to mid-tier analysts. Questions asked: (00:00) Introduction (03:27) A bit about Ely Kahn (04:29) Current State of AI in Cybersecurity (06:45) How AI could impact Cybersecurity User Workflow? (08:37) What are some of the concerns with such a model? (14:22) How does it compare to an analyst not using this model? (21:41) What's stopping models from going into autopilot? (30:14) The reasoning for using multiple LLMs (34:24) ChatGPT vs Anthropic vs Mistral You can discover more about SentinelOne's Purple AI here!
How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and what the right way is to threat model an LLM. Questions asked: (00:00) Introductions (02:12) A bit about Rob Ragan (03:33) AI in Security Assessment and Pentesting (09:15) How is AI impacting pentesting? (14:50) Where to start with AI implementation in offensive Security? (18:19) AI and Static Code Analysis (21:57) Key components of LLM pentesting (24:37) Testing what's inside a functional model? (29:37) What's the right way to threat model an LLM? (33:52) Current State of Security Frameworks for LLMs (43:04) Is AI changing how Red Teamers operate? (44:46) A bit about Claude 3 (52:23) Where can you connect with Rob? Resources spoken about in this episode: https://www.pentestmuse.ai/ https://github.com/AbstractEngine/pentest-muse-cli https://docs.garak.ai/garak/ https://github.com/Azure/PyRIT https://bishopfox.github.io/llm-testing-findings/ https://www.microsoft.com/en-us/research/project/autogen/
What is the current reality for AI automation in Cybersecurity? Caleb and Ashish spoke to Edward Wu, founder and CEO of Dropzone AI, about the current capabilities and limitations of AI technologies, particularly large language models (LLMs), in the cybersecurity domain. From the challenges of achieving true automation to the nuanced process of training AI systems for cyber defense, Edward, Caleb and Ashish shared their insights into the complexities of implementing AI, the importance of precision in AI prompt engineering, the critical role of reference data in AI performance, and how cybersecurity professionals can leverage AI to amplify their defense capabilities without expanding their teams. Questions asked: (00:00) Introduction (05:22) A bit about Edward Wu (08:31) What is an LLM? (11:36) Why have we not seen enterprise-ready automation in cybersecurity? (14:37) Distilling the AI noise in the vendor landscape (18:02) Solving challenges with using AI in enterprise internally (21:35) How to deal with GenAI Hallucinations? (27:03) Protecting customer data from a RAG perspective (29:12) Protecting your own data from being used to train models (34:47) What skillset is required in a team to build your own cybersecurity LLMs? (38:50) Learn how to prompt engineer effectively
There is a complex interplay between innovation and security in the age of GenAI. As the digital landscape evolves at an unprecedented pace, Daniel, Caleb and Ashish share their insights on the challenges and opportunities that come with integrating AI into cybersecurity strategies. Caleb challenges the current trajectory of safety mechanisms in technology and argues that overregulation may inhibit innovation and the advancement of AI's capabilities. Daniel Miessler, on the other hand, emphasizes the necessity of accepting technological inevitabilities and adapting to live in a world shaped by AI. Together, they explore the potential overreach in AI safety measures and discuss how companies can navigate the fine line between fostering innovation and ensuring security. Questions asked: (00:00) Introduction (03:19) Maintaining Balance of Innovation and Security (06:21) Uncensored LLM Models (09:32) Key Considerations for Internal LLM Models (12:23) Balance between Security and Innovation with GenAI (16:03) Enterprise risk with GenAI (25:53) How to address enterprise risk with GenAI? (28:12) Threat Modelling LLM Models
What does AI mean for Cybersecurity in 2024? Caleb and Ashish sat down with Daniel Miessler. This episode is a must-listen for CISOs and cybersecurity practitioners exploring AI's potential and pitfalls. From the intricacies of Large Language Models (LLMs) and API security to the nuances of data protection, Ashish, Caleb and Daniel unpack the most pressing threats and opportunities facing the cybersecurity landscape in 2024. Questions asked: (00:00) Introduction (06:06) A bit about Daniel Miessler (06:23) Current State of Artificial General Intelligence (13:57) What's going to change in security with AI? (16:40) AI's role in spear phishing (19:10) AI's role in Recon (21:08) Where to start with AI Security? (26:48) AI-focused cybersecurity startups (31:12) Security Challenges with self-hosted LLMs (39:34) Are the models becoming too restrictive? Resources spoken about during the episode: Unsupervised Learning
LLMs, AI agents and more can be used to innovate cybersecurity practices. In this episode Ashish and Caleb sit down to chat about the nuances of creating custom AI agents, the implications of prompt engineering, and the innovative uses of AI in detecting and preventing security threats. Their conversation ranges from the complexity of Data Loss Prevention (DLP) in today's world to the realistic timeline for the advent of Artificial General Intelligence (AGI). Questions asked: (00:26) The impact of GenAI on Workforce (04:11) Understanding Artificial General Intelligence (05:57) Using Custom Agents in OpenAI (09:37) Exploring Custom AI Agents: Definition and Uses (12:08) Security Concerns with Custom AI Agents (14:32) AI's Role in Data Protection (18:41) AI's Role in API Security (20:56) Complexity of Data Protection with AI (25:42) Protecting Against Prompt Injections in AI Systems (27:53) Prompt Engineering and Penetration Testing (31:16) Risks of Prompt Engineering in AI Security (37:03) What's Hot in AI Security and Innovation?
How to efficiently secure, scale and deploy LLMs in an Enterprise? Kicking off 2024 with the final instalment of our AI Cybersecurity Primer, Caleb and Ashish talk about large language models (LLMs), their deployment in enterprise settings, and the nuances of their operation. They explore the challenges and opportunities in ensuring the security of these systems, emphasising the importance of cybersecurity measures in the evolving landscape of AI. Questions asked: (00:00) Introduction (02:23) Deployment of LLM System (07:13) Deployment in an Enterprise (12:01) Threats with LLMs (15:30) Protecting Data (18:17) LLMs and Compliance (19:51) LLM Control Plane (26:36) What's hot in AI! (36:57) Vendor risk assessment If you found this episode valuable, you can catch Part 1 & Part 2 of the AI Primer Series. If you have any questions about AI & its security, please drop them as a comment or reach out to us on info@kaizenteq.com #aicybersecurity #largelanguagemodels #ai
You can't protect what you don't understand. We are continuing Part 2 of our AI Primer on the AI Cybersecurity Podcast to understand what role AI will play in the world of cybersecurity. In this episode, Caleb and Ashish are levelling the playing field, talking all things LLMs (Large Language Models) and GenAI, and laying the foundations with AI primers for cybersecurity in season 1 of the AI CyberSecurity Podcast. Questions asked: (00:00) Introduction (02:34) Evolution of LLM and GenAI (09:20) How does an LLM work? (17:15) Differentiating between LLMs (22:05) The cost of running LLMs (23:43) Deploying an LLM (26:10) Big Companies vs Startups (32:21) What's hot in AI this week! If you found this episode valuable, listen to Part 1 of the AI Primer Series! If you have any questions about AI & its security, please drop them as a comment or reach out to us on info@kaizenteq.com
To understand what role AI will play in the world of cybersecurity, it is important to understand the technology behind it. Caleb and Ashish are levelling the playing field and laying the foundations with AI primers for cybersecurity in season 1 of the AI CyberSecurity Podcast. What was discussed: (00:00) Introduction (02:36) Learning about AI/ML (08:00) Acronyms of AI (10:49) AGI - Artificial General Intelligence (11:29) Three states of AGI (13:48) AI/ML in Security Products (17:03) Different kinds of learning (21:51) What's hot in the AI Section!
Ashish Rajan and Caleb Sima, who have been cybersecurity practitioners and CISOs for over a decade, are combining forces to bring you how cybersecurity can be applied to AI without FUD. Each episode discusses an AI theme and What's Hot in AI. You can expect new episodes on your favorite podcast player every two weeks. This is an audio & video podcast, so you can find the video of each episode on the AI CyberSecurity Podcast YouTube Channel. If you have any AI & cybersecurity queries or topics you would like us to cover, please reach out to us on info@kaizenteq.com. You can also check out our sister podcast, Cloud Security Podcast, for all your cloud and cloud native security topics.