
TL;DR: Every AI Talk from BSidesLV, Black Hat, and DEF CON 2024

Get up to speed on >60 AI talks in 15 minutes

AI and cybersecurity are moving very quickly these days.

I don’t know about you, but it can feel pretty hard to keep up with.

This past Hacker Summer Camp (BSidesLV, Black Hat USA, DEF CON), there were over 60 talks related to AI.

I wanted to know what everyone is working on.

So I spent tens of hours on it so that you can get up to speed in ~15 minutes.

Post Overview

This post is a bit meaty, so let me give you a quick overview to help you navigate.

In This Post

  1. Short Summaries - I’ll summarize the abstracts of each of the >60 talks in 1 sentence each, grouped by Category (e.g. AppSec) and Subcategory (e.g. Code Analysis).

  2. Talk Categories - I define what I mean by each Category/Tag I’m bucketing talks into, which are then used in the Supporting Posts described below.

  3. Longer Summaries - A longer summary for each talk, where I tried to capture as much of the unique info from the abstracts as possible in ~2-3 sentences.

Supporting Posts

It can be a bit annoying trying to navigate the full conference proceedings for three different conferences, across many days, and in the case of DEF CON, across the Main Track and >10 different villages.

So I gathered all of the talk titles and abstracts and put them onto 2 pages you can simply scroll through to read the full details without all the clicking.

I also gathered the social media profiles of all speakers and, where possible, links to each talk’s:

  • 📺️ Video recording

  • 🖥️ Slides

  • 📖 Paper

  • 🛠️ Tool

(Note that many of these haven’t been published yet, though some authors sent me their relevant links.)

Each talk is also labeled with a #category so you can CTRL+F for topics of interest, or jump to a talk’s full details if one of the summaries from this page strikes your fancy.

Here are the posts listing all of the abstracts and supporting links:

Alright, let’s get into it!

Table of Contents

Short Summaries (~1 sentence)

This section attempts to summarize every talk in ~1 sentence, though sometimes it’s a long sentence 😅 

Within each Category, I’ve grouped similar talks together into a Subcategory.

Securing AI

Lessons Learned from Builders
Richard Harang shared lessons NVIDIA’s AI red team learned from securing dozens of LLM-powered applications (🖥️ Slides), as did Sage's Andra Lezza and Javan Rasokat in their AppSec Village talk.

HiddenLayer’s Chloé Messdaghi gave a 101-level talk on the basics of security for AI.

Knowledge Bases and Benchmarks
MITRE's Christina Liaghati discussed MITRE ATLAS, a public knowledge base of AI adversary tactics and techniques based on real-world attacks.

Meta's Cyrus Nikolaidis and Faizan Ahmad presented CyberSecEval Prompt Injection benchmarks and PromptGuard, a model for detecting both direct and indirect injection attacks.

Workforce
An AI Village panel discussed the challenges and opportunities in developing a skilled workforce to secure AI/ML systems.

Attacking AI

Attacking AI Platforms
Wiz’s Hillai Ben-Sasson and Sagi Tzadik used malicious models to break security boundaries in AI-as-a-Service platforms like Hugging Face and Replicate.

JFrog’s Shachar Menashe analyzed the attack surface of the six most popular open source MLOps platforms (🖥️ Slides).

Attacking Assistants
The University of Texas at Austin's Ayush RoyChowdhury et al presented ConfusedPilot, an attack that introduces a malicious document into an enterprise's data pool that manipulates Copilot and other RAG-based systems.

Ben-Gurion University's Yisroel Mirsky et al unveiled a novel side-channel vulnerability in popular AI assistants that could allow an attacker to read encrypted messages, affecting OpenAI, Microsoft, Cloudflare, Quora, and Notion before disclosure.

Attacking AI-powered Devices
Harriet Farlow tested casino biometrics, bypassing facial recognition, retina scanners, and surveillance systems.

HiddenLayer's Ryan Tracey, Kasimir Schulz, and Tom Bonner demonstrated how to attack the embedded AI used for person detection on a popular family of security cameras.

Backdooring Models
Suha Hussain discussed injecting model backdoors into ML pipelines using input-handling bugs in ML tools.

Vasilios Mavroudis et al explored the hidden dangers of Deep Reinforcement Learning (DRL) backdoors (🖥️ Slides).

Tooling
NVIDIA’s Leon Derczynski et al introduced garak, a framework for discovering vulnerabilities in LLMs and dialog systems by probing them in a structured fashion.

University of California Irvine's Ian G. Harris introduced FuzzLLM, an automated fuzzing framework to proactively test and discover jailbreak vulnerabilities in LLMs.

Education
Orca Security's Ofir Yakobi and Shir Sadon introduced AI Goat, an open source deliberately vulnerable AI infrastructure for learning about and exploiting OWASP AI Top 10 vulnerabilities.

Methodology
Meta's Maya Pavlova, Ivan Evtimov, Joanna Bitton, and Aaron Grattafiori shared their methodology for red teaming Meta's Llama 3 LLM.

Psyber Labs’ Matthew Canham introduced the Cognitive Attack Taxonomy (CAT), a catalog of over 350 cognitive vulnerabilities, exploits, and TTPs.

Axur’s Larissa Fonseca discussed adversarial attacks applied to ML systems and communities focused on sharing AI Jailbreaks.

AppSec

Code Analysis
Mobb’s Kirill Efimov and Eitan Worcel discussed using AI to auto remediate code vulnerabilities (“AI autofix”) (🖥️ Slides).

Veracode’s Chris Wysopal presented on how GenAI is introducing new AppSec challenges as AI-generated code leads to higher vulnerability velocity (🖥️ Slides).

ByteDance's Wang Zhilong and Xinzhi Luo explored the strengths and limitations of using LLMs for code analysis tasks like code search and clone detection.

Web Security
Palo Alto Networks’ Jay Chen and Ravid Mazon discussed BOLABuster, their approach for automatically finding Broken Object Level Authorization (BOLA) vulnerabilities in web apps and APIs.

Access Control
Instacart’s Dominic Zanardi and Matthew Sullivan developed an LLM-powered AI bot to enable fully-automated just-in-time (JIT) access to critical systems.

Threat Modeling
Adam Shostack discussed how LLMs can help us threat model: what the challenges are, what works, and what doesn’t.

Disinformation and Phishing

Disinformation
Red Canary’s Tessa Mishoe discussed the use of AI in disinformation campaigns around the world.

University of Notre Dame’s Walter Scheirer discussed the history of photographic manipulation from the invention of the camera to generative AI.

Phishing
Arun Vishwanath, Fred Heiding, and Simon Lermen shared their research on how LLMs can be used to create a self-improving phishing bot that automates all five phases of phishing emails (collecting targets, collecting information about the targets, creating emails, sending emails, and validating the results).

KnowBe4's Perry Carpenter demonstrated how he combined LLMs, voice systems, and social engineering tactics to create a powerful backend for real-time, interactive voice-enabled cyber scams.

Blue Team

Threat Detection / Malware Analysis
TXOne Networks' Sheng-Hao Ma, Yi-An Lin, and Mars Cheng proposed CuIDA, a neural-network-based symbolic execution LLM that simulates human expert strategies in analyzing malware, inferring original behaviors of commercial packer-protected programs (VMProtect, Themida) without unpacking (🖥️ Slides).

DBAPPSecurity's Hongfei Wang, Dong Wu, and Yuan Gu used LLMs to identify APT SAAIWC attack samples and generate YARA rules for automated sample hunting.

Oak Ridge National Laboratory's Sean Oesch et al discussed how polyglot files can bypass ML-based malware detection systems (e.g. EDR) and how their custom trained ML model doesn’t have this issue.

Threat Intelligence and Hunting
Ann Johnson and Sherrod DeGrippo discussed how Microsoft is leveraging its global-scale threat intelligence and AI.

Automated Defense
Folks from Cambridge Consultants, BT, BMT, and more summarized their research applying Reinforcement Learning (RL) to automated cyber defense decision making, with promising results in Cyber First Aid, ICS, and autonomous vehicles (🖥️ Slides, 📖 Paper).

Integrating LLMs into Workflows
Expel’s Preeti Ravindra discussed the effects of integrating GenAI tooling into the SOC analyst/SecOps workflow.

Cypienta's Ezz Tahoun demonstrated how to use open source AI tools to map SIEM data to MITRE ATT&CK, cluster alerts into contextualized attack steps, build killchains to uncover coordinated attacks, and generate actionable tickets (FP, Incident, and Attack Story).

Google's Lenin Alevski explored the integration of chatbots and LLMs in cybersecurity, discussing defensive applications like automating tasks (log parsing, web scraping, data analysis) and educating team members.

Microsoft’s Bill Demirkapi shared how MSRC is leveraging fine-tuned LLMs to improve their vulnerability response process, including deriving shareable info about vulns, predicting key facts like severity, and generating root causes from crash dumps.

Honeypots
Niantic's Adel Karimi introduced Galah, an LLM-powered web honeypot that dynamically generates relevant HTTP responses to effectively mimic diverse web applications with a single prompt.

Red Team

Evading Detection
iFood’s Emanuel Valente demonstrated that all state-of-the-art DNS over HTTPS (DoH) tunnel detection ML models are vulnerable to real-world black-box adversarial attacks.

Measuring Capabilities
MITRE's Michael Kouremetis et al. presented a methodology for evaluating the offensive cyber capabilities of LLMs (🖥️ Slides).

Automating Offense
Tejas Patel discussed DARPA’s SMOKE program, which aims to develop data-driven tools to automate planning and executing emulated threat actor infrastructure for red team operations.

XBOW's Joel "Niemand_Sec" Noguera and Diego Jurado discussed their research into creating an AI agent to augment bug bounty and pentesting workflows.

Fluor's Mike Holcomb demonstrated how ChatGPT can be used to rapidly develop offensive and defensive security tools, such as identifying unknown network assets as a listening service, taking a PLC out of Run mode remotely, and converting Python scripts to PowerShell on the fly to avoid detection.

Public Policy

AI Ethics and Safety
NIST’s Raymond Sheh shared an overview of how public safety approaches the topic of technology.

Kudelski Security’s Nathan Hamiel, Microsoft’s Amanda Minnich, NVIDIA’s Nikki Pope, and Google DeepMind’s Mikel Rodriguez argued that organizations have a responsibility to ensure AI Safety to prevent various harms from manifesting.

Copenhagen Business School's Jan Trzaskowski examined how AI can manipulate human behavior and argued for safeguards to protect human dignity.

National Security
Mileva Security Labs' Harriet Farlow, an ex-intelligence professional, discussed the intersection of AI and national security.

Singapore’s Home Team Science and Technology Agency CIO NG Yeow Boon discussed the critical role of cybersecurity in AI-driven homeland security.

Legal System
Academics analyzed whether prompt injection attacks against LLMs could be considered hacking under the CFAA, finding that red teaming LLMs via prompt injection may indeed violate the CFAA (🖥️ Slides).

Aquarious Workman introduced The Transparency Algorithm, an AI/ML-powered initiative that analyzes public court case data to expose hidden biases and discriminatory practices in the US legal system.

AI History, OSINT, and Misc

OSINT
Microsoft's Mike Raggo discussed using Azure and AWS computer vision to rapidly process large sets of image and video data for forensic evidence.

Healthcare
How ARPA-H is enabling strong cybersecurity in healthcare innovation.

Center for Medicare and Medicaid Services’ Remy DeCausemaker shared the programs, policies, and projects the Digital Service at CMS.gov is building to identify and mitigate continuity and security risks in the software supply chain across the Federal Ecosystem.

Misc
Undaunted’s Mark Griffin discussed how his team took competitor AIxCC submissions and translated them into the visual AIxCC competition experience at DEF CON.

A panel on the intersection of AI and cybersecurity in the 5G network environment.

Talk Categories

To make your search for relevant info easier, I’ve tagged each abstract based on topics so you can just CTRL+F for the ones of most interest.

Here is the list of categories:

  • #securing_ai - Securing or hardening systems and applications that use AI, protecting the models themselves or their supply chain.

  • #attacking_ai - Attacking systems that use AI/ML or LLMs, adversarial techniques, prompt injections, jailbreaking.

  • #ai_history - Providing historical context around AI and/or AI security.

  • #appsec - Topics related to software security: auto-fixing code, secure code review, web security, the security of AI-generated code.

  • #disinformation - Fake news and images, disinformation campaigns, etc.

  • #phishing - Sending or detecting phishing emails.

  • #blue_team - Detection and response, threat hunting, threat intel, general defensive security.

  • #red_team - Using AI for offensive security, evading detection, automating bug bounty or penetration testing, etc.

  • #osint - Open Source Intelligence: analyzing images, audio, video, or other media at scale.

  • #public_policy - Related to government or general public policy, AI safety, AI ethics.

  • #misc - Miscellaneous talks that don’t quite fit into other buckets.

Longer Summaries (~2-3 sentences)

This section gives an overview of the same talks as above, but tries to capture as much relevant detail from the abstract as possible in ~2-3 sentences, so you can quickly get an idea of what each talk is about and whether you want to investigate it further.

Think of this section as a slightly more complete / thorough version of the Short Summaries above.

#securing_ai

HiddenLayer’s Chloé Messdaghi gave a 101-level talk on the basics of security for AI.

Richard Harang of NVIDIA's AI Red Team shared their practical findings on securing dozens of LLM-powered applications, including the most common and impactful attacks, how to assess LLM integrations, and how to design more secure integrations from first principles (🖥️ Slides).

MITRE's Christina Liaghati discussed MITRE ATLAS, a public knowledge base of AI adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams, and their latest efforts to capture real world AI incident data, expand data on vulnerabilities in open-source AI models/data, and develop mitigations to defend against AI security threats.

CISA's Christine Lai, SMU's David Lo, Seed AI's Austin Carson, and Dreadnode's Nick Landers discussed the challenges and opportunities in developing a skilled workforce to secure AI/ML systems against adversarial attacks, and strategies for cultivating hands-on experience defending ML models in real-world adversarial environments.

Meta's Cyrus Nikolaidis and Faizan Ahmad presented CyberSecEval Prompt Injection benchmarks, a benchmark for evaluating the susceptibility of LLMs to direct and indirect prompt injection attacks, and PromptGuard, a model for detecting both direct jailbreak and indirect injection attacks in LLM-powered applications.

Sage's Andra Lezza and Javan Rasokat shared lessons learned building and defending LLM applications, including critical vulnerabilities discovered, innovative defense strategies employed, and practical tips for enhancing the security and robustness of AI applications.

#attacking_ai

Axur’s Larissa Fonseca discussed adversarial attacks applied to ML systems, communities focused on sharing AI jailbreaks, and their application to the most widely used AIs on the market.

Harriet Farlow discussed testing the biometrics of Australia’s 4th finest casino, Canberra Casino, bypassing facial recognition, retina scanners, and surveillance systems using adversarial machine learning techniques.

Trail of Bits’ Suha Hussain discussed exploits in which attackers inject model backdoors into ML pipelines using input-handling bugs in ML tools. Suha et al exploited ML model serialization bugs in popular tools to construct backdoors, and in the process, developed malicious artifacts such as polyglot and ambiguous files using ML model files. Also check out the pickle security tool Fickling.

Psyber Labs’ Matthew Canham introduced the Cognitive Attack Taxonomy (CAT), a catalog of over 350 cognitive vulnerabilities, exploits, and TTPs which can be applied to humans, AI, and non-human biological entities. The CAT enables on-demand creation of cognitive attack graphs and kill chains for nearly any target.

Wiz's Hillai Ben-Sasson and Sagi Tzadik presented a novel attack technique, using malicious models to break security boundaries in AI-as-a-Service platforms like Hugging Face and Replicate, allowing them to access customers' private models, weights, datasets, and user prompts, as well as backdoor popular models to launch supply chain attacks.

JFrog's Shachar Menashe analyzed the attack surface of the six most popular open source MLOps platforms, showing how each MLOps feature maps to real-world attacks, and how inherent vulnerabilities in the formats used by these platforms can be abused to infect an entire organization, even when fully patched (🖥️ Slides).

The Alan Turing Institute's Vasilios Mavroudis et al explored the hidden dangers of Deep Reinforcement Learning (DRL) backdoors, demonstrating how adversaries can embed backdoors in DRL models through supply chain attacks, and shared techniques to detect and neutralize these threats in real-time (🖥️ Slides).

HiddenLayer's Ryan Tracey, Kasimir Schulz, and Tom Bonner demonstrated how to attack the embedded AI used for person detection on a popular family of security cameras, allowing an attacker to evade detection without suspicious disguises. They reverse engineered the proprietary model format, gained root access to the device, and performed attacks against the model and camera.

NVIDIA’s Leon Derczynski et al introduced garak (Generative AI Red-teaming and Assessment Kit), a framework for discovering vulnerabilities in LLMs and dialog systems by probing them in a structured fashion, helping inform alignment and policy discussions for LLM deployment.

The University of Texas at Austin's Ayush RoyChowdhury et al presented ConfusedPilot, an attack that introduces a malicious document into an enterprise's data pool that manipulates Copilot and other RAG-based systems, causing them to respond to user queries with misinformation that persists even after the malicious document is deleted.

Orca Security's Ofir Yakobi and Shir Sadon introduced AI Goat, an open source deliberately vulnerable AI infrastructure for learning about and exploiting OWASP AI Top 10 vulnerabilities.

University of California Irvine's Ian G. Harris introduced FuzzLLM, an automated fuzzing framework to proactively test and discover jailbreak vulnerabilities in LLMs by utilizing templates to capture prompt structure, isolating key jailbreak features as constraints, and integrating base classes into combo attacks.

Meta's Maya Pavlova, Ivan Evtimov, Joanna Bitton, and Aaron "dyn" Grattafiori shared their methodology for red teaming Meta's Llama 3 LLM, including discovering new risks, scaling attacks with automation, their novel approach using multi-turn adversarial AI agents, and benchmarking safety across high-risk areas with their open source CyberSecEvals framework.

Ben-Gurion University's Yisroel Mirsky, Guy Amit, Daniel Ayzenshteyn, and Roy Weiss unveiled a novel side-channel vulnerability in popular AI assistants that could allow an attacker to read encrypted messages, affecting OpenAI, Microsoft, Cloudflare, Quora, and Notion before disclosure.

#ai_history

Sven Cattell gave the opening keynote of BSidesLV: "Secure AI" is 20 years old.

Pulling from over 25 years of experience, Google's Heather Adkins discussed how the threat landscape has evolved with the introduction of AI, lessons learned applying AI for security, and how AI can be used to protect critical infrastructure.

#appsec

Mobb’s Kirill Efimov and Eitan Worcel discussed the complexities of using AI in auto remediating code vulnerabilities (“AI autofix”), and the benefits of augmenting your approach with RAG (🖥️ Slides).

Palo Alto Networks’ Jay Chen and Ravid Mazon discussed BOLABuster, their approach for automatically finding Broken Object Level Authorization (BOLA) vulnerabilities in web apps and APIs, leveraging LLMs to understand application logic, reveal endpoint dependencies, generate test cases, and interpret results.

Instacart’s Dominic Zanardi and Matthew Sullivan developed an LLM-powered AI bot to enable fully-automated just-in-time (JIT) access to critical systems, balancing employee productivity with security assurance and oversight. They shared tactics for using LLMs to enhance internal security tooling and workflows.

Veracode’s Chris Wysopal gave an overview of how GenAI is transforming software development, introducing new AppSec challenges as AI-generated code leads to higher vulnerability velocity due to the models being trained on vulnerable open source code, and developers over-trusting AI-generated code.

ByteDance's Wang Zhilong and Xinzhi Luo explored the strengths and limitations of using LLMs for code analysis tasks like code search and clone detection, showing examples of when LLMs make mistakes, such as struggling with misleading variable and function names.

Adam Shostack discussed how LLMs can help us threat model: what the challenges are, what works, and what doesn’t.

#disinformation

Red Canary’s Tessa Mishoe discussed the use of AI in disinformation campaigns around the world, including past campaigns and their long term effects.

University of Notre Dame’s Walter Scheirer discussed the history of photographic manipulation from the invention of the camera to generative AI, based on his recent book "A History of Fake Things on the Internet".

#phishing

Arun Vishwanath, Fred Heiding, and Simon Lermen shared their research on how LLMs can be used to create a self-improving phishing bot that automates all five phases of phishing emails (collecting targets, collecting information about the targets, creating emails, sending emails, and validating the results). They then show how to counter AI-enabled phishing bots via personalized spam filters and a digital footprint cleaner.

Sublime Security’s Josh Kamdjou showed real, in-the-wild phishing attacks with completely fabricated contents, including conversations between multiple individuals that never happened, to demonstrate the sophistication LLMs can afford attackers in conducting convincing phishing campaigns at scale.

KnowBe4's Perry Carpenter demonstrated how he combined LLMs, voice systems, and social engineering tactics to create a powerful backend for real-time, interactive voice-enabled cyber scams, and shared insights from his research pushing AI past ethical boundaries.

#blue_team

Expel’s Preeti Ravindra discussed the effects of integrating GenAI tooling into the SOC analyst/SecOps workflow, and insights gained around friction points.

Microsoft’s Bill Demirkapi shared how MSRC is leveraging fine-tuned LLMs to improve their vulnerability response process, including deriving shareable info about vulns, predicting key facts like severity, and generating root causes from crash dumps.

DBAPPSecurity's Hongfei Wang, Dong Wu, and Yuan Gu discussed how they used LLMs to identify APT SAAIWC attack samples, generate YARA rules for automated sample hunting, and apply LLMs more broadly to threat intelligence and hunting.

TXOne Networks' Sheng-Hao Ma, Yi-An Lin, and Mars Cheng proposed CuIDA, a novel neural-network-based symbolic execution LLM that simulates human expert strategies in analyzing malware, successfully identifying up to 67% of missed threats in their experiment and even inferring original behaviors of commercial packer-protected programs (VMProtect, Themida) without unpacking (🖥️ Slides).

Folks from Cambridge Consultants, BT, BMT, and more summarized their research applying Reinforcement Learning (RL) to automated cyber defense decision making, including two contrasting Multi Agent RL approaches and deep RL combined with Graph Neural Networks, with promising results in Cyber First Aid, ICS, and autonomous vehicles (🖥️ Slides, 📖 Paper).

Ann Johnson and Sherrod DeGrippo discussed how Microsoft is leveraging its global-scale threat intelligence and AI to make smarter security choices and pioneer innovative security strategies and solutions, as well as how threat intelligence shapes security strategy and how AI is transforming what we know about security today.

Oak Ridge National Laboratory's Sean Oesch et al discussed how polyglot files can bypass ML-based malware detection systems (e.g. EDR) as the polyglot file is sent to the wrong model because the embedded file type is not detected, and how their custom trained ML model can detect all polyglot types in their dataset with over 99.9% accuracy.
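
To make the bypass concrete, here is a minimal sketch of my own (not from the talk, and deliberately benign): appending a ZIP archive to a JPEG produces a file that image tooling treats as a JPEG while ZIP tooling happily opens it, because JPEG parsers read from the front of the file and ZIP readers locate the central directory from the end. A detector that routes files by their leading magic bytes will send this to the wrong model. Filenames below are placeholders.

```python
import io
import zipfile

# Hypothetical illustration of a JPEG/ZIP polyglot (benign contents only).
# JPEG parsers typically ignore trailing bytes after the End-Of-Image marker,
# while zipfile locates the ZIP central directory by scanning from the end of
# the file, so a single file satisfies both formats.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    zf.writestr("note.txt", "hello from inside the polyglot")

with open("photo.jpg", "rb") as f:  # any existing JPEG (placeholder name)
    jpeg_bytes = f.read()

with open("polyglot.jpg", "wb") as out:
    out.write(jpeg_bytes + zip_buf.getvalue())

# Magic-byte sniffing sees a JPEG...
print(open("polyglot.jpg", "rb").read(2) == b"\xff\xd8")  # True
# ...but it is also a perfectly valid ZIP archive.
print(zipfile.is_zipfile("polyglot.jpg"))  # True
```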

Google's Lenin Alevski explored the integration of chatbots and LLMs in cybersecurity, discussing defensive applications like automating tasks (log parsing, web scraping, data analysis) and educating team members, as well as offensive uses like social engineering, phishing simulations, and automating attack techniques.

Cypienta's Ezz Tahoun demonstrated how to use open source AI tools to map SIEM data to MITRE ATT&CK, cluster alerts into contextualized attack steps, build killchains to uncover coordinated attacks, and generate actionable tickets (FP, Incident, and Attack Story) to facilitate immediate response and improve security posture.

Niantic's Adel Karimi introduced Galah, an LLM-powered web honeypot that dynamically generates relevant HTTP responses to effectively mimic diverse web applications with a single prompt, and shared lessons learned on LLM performance for HTTP message generation and the impact of authentic responses on attacker engagement.
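
To illustrate the general pattern, here is a hypothetical Python analogue of an LLM-backed honeypot, not Galah’s actual implementation; the model name, prompt, and port are my own assumptions. The core loop is simply: forward the incoming request to an LLM with a single system prompt and serve back whatever HTML it produces.

```python
# Hypothetical sketch of an LLM-backed web honeypot, loosely inspired by the
# idea behind Galah; not its actual code. Requires the openai package and an
# OPENAI_API_KEY in the environment; model name and prompt are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are emulating a believable but fake web application. Given an HTTP "
    "request, reply with only the HTML body a real server might return."
)

class HoneypotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hand the raw request line and headers to the model, serve its answer.
        request_text = f"GET {self.path} HTTP/1.1\n{self.headers}"
        completion = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": request_text},
            ],
        )
        body = completion.choices[0].message.content.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HoneypotHandler).serve_forever()
```

A production honeypot would additionally log every request, cache responses for consistency, and handle non-GET methods; this sketch omits all of that.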

#red_team

iFood’s Emanuel Valente demonstrated that all state-of-the-art DNS over HTTPS (DoH) tunnel detection ML models are vulnerable to real-world black-box adversarial attacks, and shared specific features developers should avoid using to prevent evasion.

MITRE's Michael Kouremetis et al. presented a rigorous methodology for evaluating the offensive cyber capabilities of LLMs, including prompt and response evaluation, high-fidelity cyber-attack simulations, and emulation test scenarios on real targets to determine if increasing levels of offensive cyber capabilities exist within a model's capacity (🖥️ Slides).

Fluor's Mike Holcomb demonstrated how ChatGPT can be used to rapidly develop offensive and defensive security tools, such as identifying unknown network assets as a listening service, taking a PLC out of Run mode remotely, and converting Python scripts to PowerShell on the fly to avoid detection.

Tejas Patel discussed DARPA’s Signature Management using Operational Knowledge and Environments (SMOKE) program, which aims to develop data-driven tools to automate planning and executing emulated threat actor infrastructure for red team operations.

XBOW's Joel "Niemand_Sec" Noguera and Diego Jurado discussed their research into creating an AI agent to augment bug bounty and pentesting workflows, aiming to enhance the efficiency and effectiveness of security researchers by mimicking and enhancing human intuition and creativity in identifying vulnerabilities.

#osint

Microsoft's Mike Raggo discussed using Azure and AWS computer vision to rapidly process large sets of image and video data for forensic evidence, including object detection, facial recognition, OCR, and voice-to-text analysis, and applying chronolocation analysis to paint a true-life narrative, for example placing an individual with a weapon at a location at a particular time.
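
As a rough illustration of the cloud-vision piece, here is my own minimal sketch (not Raggo’s tooling) using AWS Rekognition via boto3 for label detection and OCR on a single image; in practice you would fan calls like these out over the full evidence set. The region, bucket, and key names are placeholders.

```python
# Hypothetical sketch: label detection and OCR over one image in S3 with AWS
# Rekognition, the kind of per-image call you would run at scale across a
# large evidence set. Bucket/key names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
image = {"S3Object": {"Bucket": "evidence-bucket", "Name": "frames/cam1_0001.jpg"}}

# Object detection: labels such as "Person" or "Car" with confidence scores.
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=80)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# OCR: any readable text in the frame (signs, plates, timestamps).
text = rekognition.detect_text(Image=image)
for detection in text["TextDetections"]:
    print(detection["Type"], detection["DetectedText"])
```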

#public_policy

NIST’s Raymond Sheh shared an overview of how public safety approaches the topic of technology, where the gaps in their understanding are, the impact those gaps can have on their ability to keep us safe, and how the security community can get involved.

Kudelski Security’s Nathan Hamiel, Microsoft’s Amanda Minnich, NVIDIA’s Nikki Pope, and Google DeepMind’s Mikel Rodriguez argued that as AI systems become more ubiquitous, organizations have a responsibility to ensure AI Safety to prevent various harms from manifesting. They discussed the challenges, responsibilities, and role of security professionals in addressing AI Safety.

Academics analyzed whether prompt injection attacks against LLMs could be considered hacking under the CFAA, finding that red teaming LLMs via prompt injection may indeed violate the CFAA (🖥️ Slides).

NVIDIA's Nikki Pope discussed the ethical challenges of using AI/ML in AI-driven security systems, such as fairness, transparency, and accountability, and the importance of ethical AI development practices like diverse teams, bias testing, and ongoing audits.

Mileva Security Labs' Harriet Farlow, an ex-intelligence professional, discussed the intersection of AI and national security through the lens of pop culture, covering the extent to which fears surrounding AI systems apply to real-life national security settings, and unraveling the interplay between hackers, AI, and government agencies with demos of how these AI systems work under the hood.

Copenhagen Business School's Jan Trzaskowski examined how AI can manipulate human behavior and argued for safeguards to protect human dignity, as information asymmetries have reduced individual agency and transparency.

Singapore’s Home Team Science and Technology Agency CIO NG Yeow Boon discussed the critical role of cybersecurity in AI-driven homeland security, highlighting the importance of keeping cybersecurity at the forefront during development and in organizational best practices to mitigate risks from the increased attack surface of AI-powered technologies.

Archer Integrated Risk Management's Aquarious Workman introduced The Transparency Algorithm (TTA), an AI/ML-powered initiative that analyzes public court case data to expose hidden biases and discriminatory practices in the US legal system, with the goal of bringing transparency, accountability, and equality.

#misc

ARPA's Andrew Carney, Dr. Jennifer Roberts, and Dr. Susan Coller Monarez discussed how ARPA-H is enabling strong cybersecurity in healthcare innovation through the AIxCC, DIGIHEALS, and UPGRADE programs.

Center for Medicare and Medicaid Services’ Remy DeCausemaker shared the programs, policies, and projects the Digital Service at CMS.gov is building to identify and mitigate continuity and security risks in the software supply chain across the Federal Ecosystem.

Undaunted’s Mark Griffin discussed how his team took competitor AIxCC submissions and translated them into the visual AIxCC competition experience at DEF CON.

A panel on the intersection of AI and cybersecurity in the context of the rapidly evolving 5G network environment.

Wrapping Up

Phew, you made it! 🙌 

Did you find this useful? Feel free to let me know.

And I’d love to hear any feedback on what would make this, or a similar post, more useful in the future.

Once more, here are the links to the pages listing the full talk abstracts: