AI and Machine Learning in Cybersecurity
An overview of current applications of AI/ML to cybersecurity with relevant links and a vision of where things are headed.
Ever since ChatGPT (and arguably before), the pace of innovation in machine learning / AI has been quite impressive.
This post is my attempt to:
Consolidate the many recent applications of AI to cybersecurity in one place
Tie together trends
And reason about where things may be headed.
Given the pace at which things are moving, this will likely be an incomplete list.
If there are meaningful blog posts, talks, or tools I should be aware of, please reach out and let me know!
Changelog
2023/07/03 - Wrote section on current applications/future work on AI in reverse engineering and added a number of relevant tools to the References section.
Themes and Where Things are Headed
Reverse Engineering
Current Applications
At the time of this writing, applications of AI in reverse engineering fall into a small set of buckets.
Nearly all tools leverage LLMs to aid in program understanding:
Generate a plain text description of this {assembly or decompiled code}
Help me rename this {variable or function}
And less frequently:
Audit this code for vulnerabilities
Validate another tool's security findings (Semgrep, in Callisto's case)
See the "Reverse Engineering" subsection in the References section for a summary of ~8 tools you can review.
Future Work
Here are a few applications I find promising that do not yet seem to be explored.
Lower the barrier to entry for complex tools
IDA Pro, Ghidra, Binary Ninja, and the like are awesome, powerful tools, but they are complex and have a steep learning curve.
Instead of having to navigate many menus or read separate documentation, what if all of that info was indexed and there was a helpful chat box in which you could simply ask, "How do I do ?" and get a response right there, without interrupting your flow?
Personal Assistant
What if your tool observed your behavior over time, across many reversed samples, and when you got stuck, offered advice like, "Last time you were in {situation}, you did {actions}; what if you tried that?"
And perhaps it could even take those actions for you automatically, especially if, for example, there is a set of things you always do when starting a new reversing project.
Personal Expert Mentor
Building on the personal assistant idea, what if there was a model trained on the reversing behaviors and workflows of the best reversers in the world?
And as you were reversing, you'd see a little pop-up, kind of like auto-complete in Gmail or a Hacker Clippy, that'd recommend (and maybe even do for you) what an expert would do.
You could imagine a solid implementation of this enabling junior reversers to perform at an intermediate level much more quickly, and maybe even make intermediate or advanced reversers even more effective.
An ideal implementation would not just say, "Do this," but also provide the reasoning and context, the why behind it, to upskill the user's thought processes, not just their actions.
Lastly, I'm not sure if this would be as useful as the above, but it would be interesting for the expert bot to go as deep as modeling a particular reverser's style, like "In this situation, Rolf Rolles would do {this}, or Malware Unicorn or Azeria would do {that}."
Perhaps this could even fund a creator-economy / expert-model-as-a-service income stream for skilled reversers where they could bundle that knowledge and expertise and license the model, which individuals or companies could subscribe to.
Index and make available relevant context
There are a number of sources of information that could be indexed (for example, in a vector DB) and made available to a local model in your reversing platform of choice, saving you time by providing context and preempting duplicate work.
For example:
Hashes, instruction sequences, and other IoCs of known malware, from VirusTotal, company blog write-ups, threat intel companies, and more.
This would make it easy to know if you're reversing something similar or identical to known malware, so rather than reversing from scratch, you could leverage (and maybe programmatically import) the work others have already done.
Indicators for common packers and obfuscators, as well as how to handle them.
Sharing proprietary data from your colleagues who may have done similar work.
Indexing known/popular functions or libraries so you're not wasting time reversing "known" functionality (a minimal lookup sketch follows this list).
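To make that last idea concrete, here's a minimal sketch of what the lookup could look like, assuming you already have plain-text summaries or decompilations of known functions. The embedding model name, the `known_functions` data, and the overall shape are illustrative placeholders, not any real product's implementation.

```python
# Sketch: index summaries of known functions, then match the one you're reversing.
# Assumes an OpenAI API key is configured; model name and data are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Hypothetical corpus: name -> decompiled code or plain-text summary.
known_functions = {
    "openssl_sha256": "Computes a SHA-256 digest over an input buffer...",
    "xor_decrypt_config": "XOR-decrypts an embedded config blob with a 1-byte key...",
}
index = {name: embed(text) for name, text in known_functions.items()}

def closest_known(decompiled: str, top_k: int = 3):
    # Cosine similarity between the unknown function and every indexed one.
    q = embed(decompiled)
    scores = {
        name: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        for name, v in index.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(closest_known("undefined4 FUN_00401000(byte *buf, int len) { ... }"))
```

A real version would want a proper vector DB, chunking, and metadata (sample hashes, source write-up, packer info, etc.), but the core mechanic is just nearest-neighbor lookup over embeddings.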
If you have feedback on any of these ideas or have other ideas I haven't listed, I'd love to hear them! Happy to add them to this list and credit you.
References
Web Security
Using LLMs to reverse JavaScript variable name minification
Jesse Luoto writes about how to reverse minified JavaScript using LLMs like ChatGPT and llama2 while keeping the code semantically intact.
Unminify and prettify the code.
Ask the LLM to describe the intent and suggest better names for the variables (sketched below).
Use Babel to do the renaming, which can effectively rename a JavaScript variable within its scope by operating on the code's Abstract Syntax Tree (AST), preserving how the code works.
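The rename-suggestion step could look roughly like this sketch in Python; the model name and prompt wording are my own assumptions, not taken from Jesse's post, and the actual renaming would still be applied by Babel on the AST.

```python
# Sketch: ask an LLM for better names for minified identifiers, as a JSON mapping.
# The renames would then be applied with Babel, which rewrites each identifier
# within its scope on the AST so behavior is preserved.
import json
from openai import OpenAI

client = OpenAI()

def suggest_renames(js_snippet: str) -> dict[str, str]:
    prompt = (
        "Here is a snippet of unminified but obfuscated JavaScript:\n\n"
        f"{js_snippet}\n\n"
        "Briefly describe what it does, then return a JSON object mapping each "
        "minified identifier (a, b, t, etc.) to a descriptive name. "
        "Return only the JSON object on the last line."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model followed instructions; a real tool would validate this.
    return json.loads(resp.choices[0].message.content.splitlines()[-1])
```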
How to automate API Specifications for Continuous Security Testing
Escape's Younes Haddou describes a project for automatically generating an OpenAPI specification from source code, using Semgrep to extract routes and parameters and an LLM to infer the types of parameters.
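A rough approximation of that pipeline as a sketch; the Semgrep rule file, prompt, and model below are illustrative placeholders rather than Escape's actual implementation.

```python
# Sketch: Semgrep finds route handlers; an LLM guesses parameter types.
# "routes.yaml" is a hypothetical rule matching route definitions (e.g. @app.route).
import json
import subprocess
from openai import OpenAI

client = OpenAI()

result = subprocess.run(
    ["semgrep", "scan", "--config", "routes.yaml", "--json", "src/"],
    capture_output=True, text=True, check=True,
)
findings = json.loads(result.stdout)["results"]

for f in findings:
    snippet = f["extra"]["lines"]  # the matched source lines
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Infer the parameters and their types for this route handler "
                       "and return an OpenAPI 3.0 path item as YAML:\n\n" + snippet,
        }],
    )
    print(resp.choices[0].message.content)
```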
tenable/Burp-extension-for-GPT
A Burp Suite extension that leverages OpenAI to analyze HTTP traffic and identify potential security concerns. "We tested some real-world scenarios and we found that GPT3.5 and GPT4 successfully identified XSS and misconfigured HTTP headers without requiring any additional fine-tuning."
aress31/burpgpt
A Burp Suite extension that integrates OpenAI's GPT to perform an additional passive scan on web traffic observed. It also allows you to provide customizable prompts. This repo is no longer maintained, as the creator has launched a commercial version, which also supports local models.
AppSec
AI & Cybersecurity: Learnings from three months of Semgrep Assistant
Bence Nagy shares learnings from the private beta of GPT-4-powered Semgrep Assistant, which so far has received 95% positive feedback on its recommendations. Assistant is now in public beta, so if you use GitHub.com, you can start using it immediately for free.
OX-GPT
OX Security's Yael Citro describes OX-GPT, which can describe the impact of security issues, explain how to solve them, and recommend code that fixes them. They describe it as "AppSec's First ChatGPT Integration," despite Semgrep launching basically the same thing a month earlier.
Secure your machine learning with Semgrep
To strengthen the machine learning ecosystem, Trail of Bits' Suha Hussain has analyzed the source code of libraries and identified some common problematic patterns. They have turned those patterns into 11 newly released Semgrep rules, so it's easy to find and fix those potential bugs.
Cracking Random Number Generators using Machine Learning - Part 1: xorshift128
NCC Group's Mostafa Hassan shows how machine learning can predict the sequence of xorshift128's random numbers from previously generated numbers, without knowledge of the seed. In part 2, he tackles Mersenne Twister.
Penetration Testing
GreyDGL/PentestGPT
A GPT-empowered penetration testing tool, by Gelei Deng, Víctor Mayoral Vilches, et al.
Attack Surface Management
projectdiscovery/openrisk
Tool by Project Discovery that reads nuclei (an OSS vulnerability scanner) output and generates a risk score for the host using GPT-3.
Code Review
Editor's note: I've seen a number of blog posts where people use LLMs for auditing source code. Some of them have pretty poor methodologies. The Trail of Bits post below is probably the best one I've seen so far in teasing out nuances.
I think there are a lot of potential improvements and future work in this space; I just want to point out the challenges and nuances so that the next time you see a blog post or company claiming, "ZOMG LLMs can find every vulnerability evar!" you'll think critically about their approach and methodology.
Using AI to find software vulnerabilities in XNU
Tim Michaud shares his experience of using a codebert-c model (an open source model from HuggingFace) trained with the codeparrot/github-code-clean dataset to pinpoint a memory corruption vulnerability affecting XNU.
Also, TIL about Fill-Mask: mask some of the words in a sentence and predict which words should replace those masks.
In the current state of the AIs used (Which could VERY well be due to my misuse of them), I did not find this a compelling use case. Perhaps if I understood things better and had the time/patience/more effort to put into this it could've been better (And likely would be!).
Codex (and GPT-4) can't beat humans on smart contract audits
Trail of Bits' Artem Dinaburg, Josselin Feist, and Riccardo Schirone discuss their initial investigations into using LLMs for security auditing Solidity code. In short, it's not great at it today and the tooling is nascent.
There is no particularly good way to determine if one prompt is better than another or if you're on the right track. Similarly, when a prompt fails against an input, it is frustratingly difficult to figure out why and to determine, programmatically, which prompts are merely returning the wrong result versus completely hallucinating and misbehaving.
Unit tests are also problematic; the results are not guaranteed to be the same across runs, and newer models may not provide the same results as prior ones. There is certainly a solution here, but again, the tooling developers expect just wasn't present.
And a spicy take:
people simping over AI based audits are going to lose their heads when they discover static analysis
- Benjamin Samuels (@thebensams), Jun 4, 2023
Measuring LLMs ability to find security issues in code
GP Sandhu scans the OWASP Java Benchmark for vulnerabilities with both GPT-3.5 and GPT-4, measuring the true positive, false positive, and false negative rates, and released a repo with the results. Both models did fairly well at finding true positives (especially GPT-4), but they both suffered from high false positive rates. GP also points out that this can get expensive: $5 for GPT-3.5 and ~$100 for GPT-4 to scan 2,740 Java files.
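For reference, the bookkeeping behind numbers like these is straightforward; here's a tiny sketch (the data shapes and file names are hypothetical) that tallies true/false positives and false negatives per file against a benchmark's ground truth.

```python
# Sketch: score model findings against ground truth, per file.
# "findings" and "ground_truth" map file -> set of vulnerability categories.
def score(findings: dict[str, set[str]], ground_truth: dict[str, set[str]]):
    tp = fp = fn = 0
    for path, expected in ground_truth.items():
        reported = findings.get(path, set())
        tp += len(reported & expected)   # correctly reported
        fp += len(reported - expected)   # reported but not real
        fn += len(expected - reported)   # real but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn, "precision": precision, "recall": recall}

print(score(
    {"BenchmarkTest00001.java": {"sqli"}},
    {"BenchmarkTest00001.java": {"sqli"}, "BenchmarkTest00002.java": {"xss"}},
))
```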
Lost in ChatGPT's memories: escaping ChatGPT-3.5 memory issues to write CVE PoCs
Altin delves into ChatGPT's memory limitations, offering solutions to escape the 4096-token limit and outlining how to use ChatGPT as an assistant to analyze large codebases and write a CVE PoC for a resource exhaustion vulnerability discovered in Go's textproto package.
The ChatGPT plugin OpenAI was using to assess the security of other plugins
rez0 shares the prompt, which was broken down into instructions, facts, and OpenAI's policy.
Security Code Review With ChatGPT
NCC Group's Chris Anley walks through having ChatGPT explain several vulnerabilities in the "Damn Vulnerable Web Application" (DVWA) and analyzes its results.
TL;DR: Don't use ChatGPT for security code review. It's not meant to be used that way, it doesn't really work (although you might be fooled into thinking it does), and there are some other major problems that make it impractical. Also, both the CEO of OpenAI and ChatGPT itself say that you shouldn't.
Overview Articles
How foundation models reshape cybersecurity tooling
Innovation Endeavors' Harpi Singh and Dhruv Iyer do a nice round-up of current applications of LLMs to cybersecurity (search, code writing, vulnerability explanation, incident response and threat intelligence) across a number of vendors, and discuss promising opportunities: penetration testing, security reviews, and security-as-code generation.
Large language models in security
Innovation Endeavors' Davis Treybig discusses a number of areas where he believes LLMs can be applied or integrated within security: SIEM, SAST, email security and phishing, SOAR, data loss prevention and data security, remediation, translating between security and everyone else, and vendor security questionnaires and compliance.
tenable/awesome-llm-cybersecurity-tools
A curated list of cybersecurity tools that leverage LLMs, by Tenable's Olivia Fraser and Blake Kizer, across reverse engineering, network analysis, cloud security, and PoCs.
Can Generative AI Improve Your Cybersecurity Posture in 2023 and Beyond?
Mark Lynd provides an overview of some ways that AI can be applied to security, and lists a number of security vendors that have already integrated AI in some way into their products.
Limmen/awesome-rl-for-cybersecurity
A curated list of resources dedicated to reinforcement learning applied to cyber security, by Kim Hammar.
Cloud Security
AWS Docs GPT
AI-powered search & chat for AWS documentation.
LeanerCloud/ChatGPT-cloud-plugin
A ChatGPT plugin by Cristian Magherusan-Stanciu that will (one day) allow you to list and manage cloud resources from ChatGPT.
tenable/EscalateGPT
An AI-powered tool for discovering privilege escalation opportunities in AWS IAM configurations, by Tenable. "In our testing against real-world AWS environments, we found that GPT4 managed to identify complex scenarios of privilege escalation based on non-trivial policies through multi-IAM accounts."
Container Security
robusta-dev/chatgpt-yaml-generator
By Robusta.dev: A ChatGPT plugin that gives ChatGPT up-to-date knowledge of every Kubernetes resource and their complete specs, lets ChatGPT query the Kubernetes OpenAPI schema to fetch resource definitions, and provides ChatGPT with schema validation capabilities.
Phishing / Disinformation
WormGPT - The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
Daniel Kelley covers a new cybercrime tool that enables more effective phishing emails by leveraging LLMs, cybercrime forum discussions around "jailbreaks," and more.
Wasting Scam Callers' Time with AI
A service that leverages GPT-4 to waste the time of telemarketers and scammers. The audio samples are pretty hilarious.
China is getting better at using AI to generate content designed to go viral in the U.S. and other democracies. Microsoft threat intel says China has "successfully engaged target audiences in at least 40 languages and grown its audience to over 103 million."
CICERO: An AI agent that negotiates, persuades, and cooperates with people
Meta announces CICERO, the first AI to achieve human-level performance in the popular strategy game Diplomacy. CICERO demonstrated this by playing on webDiplomacy.net, an online version of the game, where CICERO achieved more than double the average score of the human players and ranked in the top 10 percent of participants who played more than one game.
Diplomacy has been viewed for decades as a near-impossible grand challenge in AI because it requires players to master the art of understanding other people's motivations and perspectives; make complex plans and adjust strategies; and then use natural language to reach agreements with other people, convince them to form partnerships and alliances, and more. CICERO is so effective at using natural language to negotiate with people in Diplomacy that they often favored working with CICERO over other human participants.
CICERO can deduce, for example, that later in the game it will need the support of one particular player, and then craft a strategy to win that person's favor, and even recognize the risks and opportunities that that player sees from their particular point of view.
It is important to recognize that CICERO also sometimes generates inconsistent dialogue that can undermine its objectives.
While CICERO is only capable of playing Diplomacy, the technology behind this achievement is relevant to many real world applications. Controlling natural language generation via planning and RL could, for example, ease communication barriers between humans and AI-powered agents. For instance, today's AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill?
Malicious Dependencies / Supply Chain Security
LLM-assisted Malware Review: AI and Humans Join Forces to Combat Malware
Endor Labs' Henrik Plate describes some experiments they performed that suggest LLM-based malware reviews can complement, but not yet substitute for, human reviews.
1,800 artifacts from PyPI and npm → 34 flagged as malware, 19/34 true positives.
Signals like those in the Backstabber's Knife Collection were used.
"False-positives are predominantly due to incomplete and syntactically incorrect code snippets, which commonly happens if the prompt's length restriction prevents us from uploading an entire file."
"GPT can be tricked with help of several simple techniques to change an assessment from malicious to benign, e.g., by using innocent function names, including comments that indicate benign functionality or through inclusion of string literals."
Reviewing Malware with LLMs: OpenAI vs. Vertex AI
Follow-up post in which Henrik makes several improvements and also tests Google's Vertex AI.
Analysis Improvements
The removal of comments in suspicious code snippets (using Pygments) reduced exposure to prompt injection.
Asked for a 0-9 risk score instead of a binary classification (this change and the comment stripping are sketched below).
Increased the context size, which also benefits from the comment removal.
>90% of the time the two models rated within 1 point of the same score.
GPT-4 outperforms the other models for non-obfuscated code - better risk ratings and source code explanations.
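A rough sketch of the two improvements called out above (comment stripping plus a 0-9 risk score), assuming Pygments is installed; the prompt wording and model are my own placeholders, not Endor Labs' actual ones.

```python
# Sketch: strip comments with Pygments (shrinking prompt-injection surface and tokens),
# then ask for a 0-9 risk score instead of a binary malicious/benign verdict.
from openai import OpenAI
from pygments.lexers import guess_lexer
from pygments.token import Comment

client = OpenAI()

def strip_comments(code: str) -> str:
    lexer = guess_lexer(code)
    return "".join(
        value for ttype, value in lexer.get_tokens(code) if ttype not in Comment
    )

def risk_score(code: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model
        messages=[{
            "role": "user",
            "content": "Rate the risk that this package code is malicious on a scale "
                       "from 0 (benign) to 9 (clearly malicious), then justify briefly:\n\n"
                       + strip_comments(code),
        }],
    )
    return resp.choices[0].message.content
```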
Introducing Socket AI â ChatGPT-Powered Threat Analysis
Socket's Mikola Lysenko describes how they're using LLMs to find malicious packages or potentially dangerous/vulnerable npm and PyPI packages. He gives examples of information exfiltration, injection vulnerabilities, exposed credentials, backdoors, code that installs dependencies, prompt poisoning, etc. Here's their earlier post on the same topic.
Self-enhancing pattern detection with LLMs: Our answer to uncovering malicious packages at scale
Apiiro's Eli Shalom and Gil David describe how they combine a representation of a target package's logic, capability analysis (e.g. does the package write files, perform HTTP requests, ...), comparison to known malicious packages, clustering, and more to detect malicious packages at scale. Includes examples of malicious PyPI packages they found.
Can you trust ChatGPTâs package recommendations?
Vulcan's Bar Lanyado describes how an attacker can typosquat library names that ChatGPT is likely to hallucinate, so that when developers use code recommended by the LLM they install the attacker's malicious package instead.
Threat Modeling
Threat Modeling Example with ChatGPT
Rusty Newton walks through using ChatGPT to describe an example system architecture and the data flow between components, generate a list of threats using STRIDE, and then recommend mitigations for those threats.
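A minimal version of that workflow as a single prompt; the architecture description and prompt wording below are illustrative, not Rusty's.

```python
# Sketch: feed a system description to an LLM and ask for STRIDE threats plus mitigations.
from openai import OpenAI

client = OpenAI()

architecture = """Web app: React SPA -> API Gateway -> Python REST API -> PostgreSQL.
Users authenticate with OAuth; file uploads land in an S3 bucket."""

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a threat modeling assistant. Use STRIDE."},
        {"role": "user",
         "content": "For this architecture, list the data flows, enumerate threats per "
                    "STRIDE category for each flow, and recommend mitigations:\n\n"
                    + architecture},
    ],
)
print(resp.choices[0].message.content)
```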
More on GPT-3 and threat modeling
Adam Shostack asks a series of prompts about Kubernetes-related threats. Overall I don't feel like it did a very good job at making them Kubernetes-specific.
Leveraging LLMs for Threat Modeling - GPT-3.5 vs Claude 2 vs GPT-4
Marcin Niemiec provided a fictitious high-level project description, architecture description, and user story to the three models and evaluated how well they threat modeled. tl;dr: GPT-4 performed the best, you can review all of their outputs, and he created a GitHub Action that automatically performs a threat model / security review.
DiagramGPT
(Not directly threat modeling, but related) Paste in a schema, infrastructure definition, or code snippet, or describe your diagram in plain language → generate a diagram.
Blue Team
Model-Assisted Threat Hunting (M-ATH) with the PEAK Framework
Splunk's Ryan Fetterman continues his series on the PEAK Threat Hunting Framework. In this article, he explores the Model-Assisted Threat Hunting (M-ATH) technique, which leverages an algorithmic-driven approach to uncover potential leads for threat hunting. By streamlining or consolidating data, this method paves the way for more traditional hunting techniques.
Google brings generative AI to cybersecurity
Google announces Cloud Security AI Workbench, a cybersecurity suite powered by a specialized "security" AI language model called Sec-PaLM. Applications:
Mandiant's Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize, and act on security threats.
VirusTotal: use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.
Chronicle: search security events and interact "conversationally" with the results.
Security Command Center AI: "human-readable" explanations of attack exposure, including impacted assets, recommended mitigations, and risk summaries for security, compliance, and privacy findings.
We present That Escalated Quickly (TEQ), a machine learning framework that reduces alert fatigue with minimal changes to SOC workflows by predicting alert-level and incident-level actionability. On real-world data, the system is able to reduce the time it takes to respond to actionable incidents by 22.9%, suppress 54% of false positives with a 95.1% detection rate, and reduce the number of alerts an analyst needs to investigate within singular incidents by 14%.
Our TRAM Large Language Model Automates TTP Identification in CTI Reports
MITRE's James Ross and Jackie Lasky describe how they've extended TRAM (Threat Report ATT&CK Mapper), a tool to automatically identify adversary tactics, techniques, and procedures (TTPs) in cyber threat intelligence (CTI) reports, to use LLMs; specifically, by fine-tuning SciBERT on 50 ATT&CK techniques.
mrwadams/attackgen
By Santander's Matthew Adams: A tool that leverages LLMs and MITRE ATT&CK to generate tailored incident response scenarios based on user-selected threat actor groups and your organization's details.
fabric/create_investigation_visualization
Neat new prompt by my friend Daniel Miessler that creates a nice overview image of an investigation. See UL 426 for examples of images generated based on a recent John Hammond video on the Apex Legends tournament hack, and the Havana Syndrome investigation.
Applying LLMs to Threat Intelligence
Thomas Roccia walks through prompt engineering, few shot prompting (e.g. output Mermaid mindmap), using Retrieval Augmented Generation (RAG) to ask questions of MITRE ATT&CK Groups information, and building a ReAct Agent that wraps a number of functions from MSTICpy (a Python library dedicated to threat intelligence investigations) as Tools, enabling the Agent to autonomously do things like query VirusTotal for a specific IP address, fetch samples from VirusTotal that communicate with a given IP address, and more.
Thomas has also launched The Intel Brief, a weekly newsletter that gives you an LLM-distilled summary of the top five threat intel reports and a mind map summary. He has kindly shared a Jupyter notebook that uses few-shot learning to automatically generate the summary and visualization.
Red Team
jiep/offensive-ai-compilation
Great list of useful resources on attacking AI models and using AI for offensive purposes (pentesting, malware, OSINT, phishing, generative AI, etc.) by José Escribano and Miguel Boza.
Fuzzing
AI-Powered Fuzzing: Breaking the Bug Hunting Barrier
Google's Dongge Liu, Jonathan Metzman, and Oliver Chang describe how they were able to use an LLM to automatically write new fuzz targets for projects already being fuzzed by OSS-Fuzz, leading to greater code coverage and moar bugs!
Writing fuzzing test harnesses tends to be very manual and time-intensive and is a limiting factor in fuzzing effectiveness, so this could be a big deal if it works at scale.
CI Spark: LLM-Powered AI-Assistant for Creating Tests 15x Faster
Code Intelligence's Khaled Yakdan describes CI Spark, a new product feature that leverages LLMs (optionally augmented by existing test code) to auto-generate test cases so their fuzzer gets higher code coverage. It can automatically identify fuzzing candidates (public functions that can be entry points) and generate code in JS/TypeScript, Java, and C/C++.
Content Moderation
Using GPT-4 for content moderation - by including a detailed policy in your prompt.
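In practice that looks roughly like the following sketch; the policy text and labels are placeholders for whatever your real policy defines.

```python
# Sketch: classify content against a detailed policy supplied in the prompt.
from openai import OpenAI

client = OpenAI()

policy = """Label K1 if the text provides instructions for acquiring weapons.
Label K2 if it merely mentions weapons in a neutral or historical context.
Label K0 otherwise. Respond with only the label and a one-sentence rationale."""

def moderate(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": policy},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

print(moderate("Where were muskets first manufactured?"))
```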
Reverse Engineering
Down the Rabbit Hole: Unusual Applications of OpenAI in Cybersecurity Tooling
Eugene Lim discusses his experiments with using OpenAI beyond human-targeted attacks like phishing and misinformation, specifically: reverse engineering assembly, analyzing Metasploit payloads, code review (e.g. finding XSS), etc.
moyix/gpt-wpre:
Tool by Brendan Dolan-Gavitt that utilizes the text-davinci-003 model to generate a summary of an entire binary using decompiled code from Ghidra. It employs a recursive approach to create natural language summaries of a function's dependencies, providing them as contextual information for the function itself in an attempt to get around token limitations.
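The recursive idea, sketched very loosely (this is not gpt-wpre's actual code; the call-graph structure, model, and prompt are placeholders): summarize leaf functions first, then pass those summaries in as context when summarizing their callers.

```python
# Sketch: bottom-up summarization over a call graph to work around context-window limits.
# call_graph maps function name -> list of callees; decompiled holds Ghidra output.
# Assumes an acyclic call graph; a real tool would need to handle recursion/cycles.
from openai import OpenAI

client = OpenAI()
summaries: dict[str, str] = {}

def summarize(func: str, call_graph: dict[str, list[str]], decompiled: dict[str, str]) -> str:
    if func in summaries:
        return summaries[func]
    callee_context = "\n".join(
        f"{c}: {summarize(c, call_graph, decompiled)}" for c in call_graph.get(func, [])
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Summarize what this decompiled function does in one paragraph. "
                       "Summaries of the functions it calls:\n" + callee_context
                       + "\n\nCode:\n" + decompiled[func],
        }],
    )
    summaries[func] = resp.choices[0].message.content
    return summaries[func]
```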
JusticeRage/Gepetto
A Python script by Ivan Kwiatkowski which uses OpenAI's gpt-3.5-turbo and GPT-4 models to provide meaning to functions decompiled by IDA Pro and rename variables.
MayerDaniel/ida_gpt
IDAPython script by Daniel Mayer that uses the unofficial ChatGPT API to generate a plain-text description of a targeted routine. The script then leverages ChatGPT again to obtain suggestions for variable and function names.
ant4g0nist/polar
An LLDB plugin that queries OpenAI's davinci-003 language model to explain the disassembly, by Chaitanya.
G-3PO: A Protocol Droid for Ghidra
Tenable's Olivia Lucca discusses using large language models to assist reverse engineers and introduces G-3PO, a Ghidra script that offers high-level explanatory comments for decompiled function code.
trailofbits/Codex-Decompiler:
A Ghidra plugin by Akshat Parikh that utilizes OpenAI's models to improve the decompilation and reverse engineering experience. It can:
Take the disassembly from Ghidra and feed it to OpenAI's codex model to decompile the code
Attempt to find vulnerabilities, generate a description, or decompile the Ghidra pseudocode using OpenAI
mahaloz/DAILA:
A decompiler-unified plugin by Zion Basque that leverages the OpenAI API to enhance your decompilation process by offering function identification, function summarisation and vulnerability detection. The plugin currently supports IDA, Binja and Ghidra.
JetP1ane/Callisto:
Callisto is an automated binary vulnerability analysis tool created by James B. It utilizes Ghidra for decompiling the binary, Semgrep for scanning the resulting C code, and GPT-3.5-Turbo to validate Semgrep's findings and potentially identify additional vulnerabilities.
Who Will AI Help More, Attackers or Defenders?
Who Will AI Help More - Attackers or Defenders?
Daniel Miessler currently believes AI will help attackers more at first, but it will help defenders more in the long term.
AI will help attackers automate and scale attacks in the short term, before defenders have sufficient understanding of their environments, but in the long term, AI plus a company's context will give defenders the advantage.
How AI will affect cybersecurity: What we told the CFTC
Trail of Bits' Dan Guido believes AI "has the potential to fundamentally change the balance between cyber offense and defense."
Can a mid-level practitioner operate at an expert level with the help of AI? Our experience suggests yes.
AI is better for problems that require breadth of knowledge and where mistakes are acceptable (e.g., document this function, write a phishing email). It's not great at problems that require mastery and correctness (e.g., find and exploit this iOS 0-day).
Bug bounties, phishing defenses, antivirus, IDS, and attribution will be among the first fields impacted as AI confers a greater advantage to attackers in the near term.
National Security
A Retrospective in Engineering Large Language Models for National Security
Report by CMU's Software Engineering Institute (SEI) that attempted to answer the following questions:
How might the Intelligence Community (IC) set up a baseline, stand-alone LLM?
How might the IC customize LLMs for specific intelligence use cases?
How might the IC evaluate the trustworthiness of LLMs across use cases?
The report found the following potential LLM use cases for national security: enhanced wargaming; synthetic data generation; interfacing with knowledge management systems; and writing, querying, modifying, and summarizing documents. The report focuses on document question answering and summarizing.
LLM output cannot be trusted for high-stakes applications without expert review.
…
Current methods for quantitatively evaluating the output of LLMs are not practical for many national security-related topics.
To Organize
A.I. and the Next Generation of Drone Warfare
The Pentagon's Replicator initiative envisions swarms of low-cost autonomous machines that could remake the American arsenal.
OpenAI Red Teaming Network
OpenAI is looking for people to help red team new models before they're published.
Stable Audio - Create music with AI.
Sounds.Studio - A modern music production platform, powered by machine learning.
Summit - Your AI powered life coach.
Delphi - Creators, Coaches, & Experts can use Delphi to create a digital copy of themselves to talk with their fans 24/7/365 on any platform.
Google DeepMind testing "personal life coach" AI tool - "The project will use generative AI to perform at least 21 different types of personal and professional tasks, including life advice, ideas, planning instructions and tutoring tips."
DeepMind's cofounder Mustafa Suleyman: Generative AI is just a phase. What's next is interactive AI - "bots that can carry out tasks you set for them by calling on other software and other people to get stuff done."
Instagraph - Convert text or URL into a visualized knowledge graph, by Yohei.
Salesforce AI's Chain of Density prompt has the AI recursively go through the same task and attempt to improve it.
k8sgpt-ai/k8sgpt
A tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.
NSA, FBI, and CISA Release Cybersecurity Information Sheet on Deepfake Threats
Now that GPT-4 with vision is here, rez0 on prompt injection via image. And also Chat History Exfiltration via Image-Based Prompt Injection
pruzko/hakuin
By Jakub Pruzinec: A blind SQL Injection optimization and automation framework that uses pre-trained and adaptive language models to efficiently extract textual data from databases.
dropbox/llm-security
By Dropbox's Mark Breitenbach and Adrian Wood: Scripts and related documentation that demonstrate attacks against large language models using repeated character sequences.
Do Not Give Away My Secrets: Uncovering the Privacy Issue of Neural Code Completion Tools
Academic paper in which they searched GitHub for code containing hard-coded secrets, removed the secrets, and then prompted GitHub Copilot and Amazon CodeWhisperer to complete the code, thus seeing if the models would "leak" secrets from code they were trained on. The Register's overview.
Note: this happens at a fairly low percentage of the prompt attempts, and many of the generated secrets are not valid (wrong structure), so I'd read this paper for the details.
SeaGOAT - A local search tool that leverages vector embeddings to enable you to search your codebase semantically, by Dániel Kántor.
Magentic - Add the @prompt decorator to create Python functions that return structured output from an LLM.
OpenAI Cookbook - Guides on how to do common tasks with LLMs.
Podcast about AI podcasting (text → audio, other languages, ...), featuring Wondercraft.ai.
Introducing Mozilla.ai: Investing in trustworthy AI
ChatGPT can now browse the Internet
ChatGPT can now see, hear, and speak - "Snap a picture of a landmark while traveling and have a live conversation about what's interesting about it."
It can also convert Figma designs into working React components.
AutoGen: Enabling next-generation large language model applications
This open source project by Microsoft looks hugely powerful. "AutoGen enables complex LLM-based workflows using multi-agent conversations. AutoGen agents are customizable and can be based on LLMs, tools, humans, and even a combination of them."
Siqi Chen: How to get GPT4 to teach you anything
"Teach me how works by asking questions about my level of understanding of necessary concepts. With each response, fill in gaps in my understanding, then recursively ask me more questions to check my understanding."
Mike Crittenden: Atomic habit building with ChatGPT
âImagine I want to develop the habit of [insert the desired habit here]. Can you provide creative ideas for each of the Four Laws of Behavior Change? Specifically, suggest a cue that will remind me to start the habit, a way to make the habit attractive and create a craving, a method to make the habit easy to perform as a response, and a reward that will make the habit satisfying.â
LinkedIn thread: On ChatGPT doing code review/SAST
This post by Chris Romeo has some interesting discussion.
Sequoia argues that GPU capacity is getting overbuilt, with some napkin math around the cost of GPUs, the energy cost of running them, data center spend, etc.
John Hwang argues that vector databases are not a separate database category, and that all incumbent databases will add this functionality, which will also be good for end users (use the same software, don't need to move data around).
Sam Altman Is the Oppenheimer of Our Age - Fascinating long profile on Sam Altman, his career, his family, and more.
Video: RT-X and the Dawn of Large Multimodal Models: Google Breakthrough and 160-page Report Highlights
10 ChatGPT Vision examples: SaaS dashboard screenshot → code, explain this workflow diagram, break down a human cell diagram for a 9th grader, ...
CommandBar Copilot - AI assistant that can walk users through workflows on your app and even complete them automatically.
DeepUnitAi - Automatically generate Jest (TypeScript) unit tests.
Google and others have been doing this for writing fuzzing test harnesses, but I think auto-generating security-related unit tests and similar seems promising.
memgraph/odin
By Katarina Supe et al: A plugin that integrates Large Language Models (LLMs) into Obsidian using LangChain, enabling users to generate knowledge graphs and questions from Markdown files, among other features.
Cloudflare launches new AI tools to help customers deploy and run models
"Build, deploy and run AI models at the network edge." They're also partnering with Hugging Face, and Cloudflare will become the first serverless GPU partner for deploying Hugging Face models. Products:
Workers AI - Access physically nearby GPUs hosted by Cloudflare partners to run AI models on a pay-as-you-go basis.
Vectorize - A vector database.
AI Gateway - Provides metrics to enable customers to better manage the costs of running AI apps.
Deep fakes of Tom Hanks are being used to scam people. Is nothing sacred?!
From AI with love: Scammers integrate ChatGPT into dating-app tool - Fake dating profiles are being created on Tinder, Bumble, and a dozen other dating apps. They can send likes, read replies from potential matches, and create varied believable profiles. Deep dive.
Researchers Tested AI Watermarks - and Broke All of Them
A research team found it's easy to evade current methods of watermarking, and even add fake watermarks to real images.
Chatbot Hallucinations Are Poisoning Web Search
Chatbots hallucinate things → that text is published online → search engines index it and serve it up as facts. GenAI could make search harder to trust.
Extreme Parkour with Legged Robots - Definitely not going to be used for military purposes, right?
Language models can explain neurons in language models
OpenAI has used GPT-4 to automatically write explanations for the behavior of neurons in LLMs and has released a dataset covering every neuron in GPT-2.
Explainability is key for alignment - as models get smarter/more capable, this will help us ensure they're doing what we want them to do and aren't intentionally deceiving us. Basically this is like doing a brain scan of an LLM.
Charity Majors' 🧵 on leveraging LLMs
Charity argues that modern apps are very complex, with nondeterministic outputs and emergent properties, so that you need to debug code by instrumenting and observing in production.
There's nothing new about tight feedback loops and observability-driven development. LLMs are simply on the far end of a spectrum that has become ever more unpredictable and unknowable.
The hardest part of software has ALWAYS been running it, maintaining it, and understanding it -- not writing it. But this was obscured for a long time by the upfront difficulty of writing code.
Generative AI turns this upside down, and makes it clear where the real costs lie.
Multi-modal prompt injection image attacks against GPT-4V
Simon Willison walks through several image-based prompt injection attacks on GPT-4 Vision, including visible written instructions, exfiltrating data, and visually hiding the prompt injection.
A framework to securely use LLMs in companies Part 3: Securing ChatGPT and GitHub Copilot
Sandesh Anand and Ashwath Kumar discuss both broad principles and specific guidelines for using ChatGPT and GitHub Copilot securely in a business.
Like Microsoft, Google Cloud will assume responsibility for any legal risks if you're challenged on copyright grounds due to using Duet AI or Vertex AI.
New: Terraform can now auto-generate test files for private modules using generative AI
Harvard and University of Oxford researchers are harnessing AI to predict threatening new strains of COVID-19 and other viruses. It successfully predicted the most frequent mutations and dangerous variants of SARS-CoV-2.
Open questions for AI engineering
Blog post version of Simon Willison's AI Engineer Summit keynote. I liked the two questions he likes to ask himself of any new technology:
What does this let me do that was previously impossible?
What does this let me build faster?
Simon argues that ChatGPT ultimately helps programmers by flattening the learning curve and rapidly getting you to an 80% solution, even if you're not familiar with the language.
Also, TIL you can extend ChatGPT Code Interpreter by uploading Python dependencies (that it can then use), or a JavaScript or Lua interpreter, for example.
berylliumsec/nebula
By @berylliumsec_: An AI-powered ethical hacking assistant that lets you provide testing instructions in natural language (e.g. "scan the top 10 ports of this IP"). In autonomous mode, you can provide a list of targets and it will automatically scan them, then try to exploit any discovered vulnerabilities. Currently integrates nmap, OWASP ZAP, Crackmapexec, and nuclei.
Daniel Miessler and Rez0: Hacking with AI (Ep. 24)
Great discussion with Justin Gardner, rez0, and Daniel Miessler on using LLMs and Agents for: code review, code understanding, web app pen testing and bug bounty, integrating them into tools like Burp or Caido, and more.
Deezer: Streaming service to detect and delete 'deepfake' AI songs
Deezer has developed tech that allows it to identify and potentially delete songs that clone pop stars' voices.
As far as I can tell, we're currently in a big grey area period re: copyright on AI writing, art, and music trained on and/or mimicking known artists. See also: Margaret Atwood Reviews a "Margaret Atwood" Story by AI.
Inject My PDF: Prompt Injection for your Resume
Some companies use automated screening tools to filter or rank resumes. Kai Greshake shares a tool that injects invisible text into your resume PDF to make any AI LLM that reads it think you are a perfect candidate. Honestly, for a security role, as the hiring manager I'd take this as a positive signal.
Three key questions potential users/buyers of LLM-based apps should ask
Great LinkedIn post by Dropzone AI's Edward Wu:
Is my private data being used to train a global model that the entire customer base would use?
What FMaaS (foundational model as a service) providers do you utilize? Do they see my private data? If yes, what's their data retention and privacy policy?
What data are you using to improve your offering?
TabbyML/tabby - A self-hosted coding assistant. An open-source and on-premises alternative to GitHub Copilot.
7 examples + prompts of highly detailed DALL-E 3 images by Chase Lean.
ReactAgent - An LLM Agent that uses GPT-4 to generate and compose React components from user stories.
NSA Director Rob Joyce shares a meme about NSA + AI
And it involves Taylor Swift. Apparently it's Meme-tober.
ARPSyndicate/puncia
By Ayush Singh: A subdomain & exploit hunter powered by AI. Basically a wrapper around two APIs: Subdomain Center (uses Apache's Nutch, Calidog's Certstream, OpenAI's Embedding Models) & Exploit Observer (also uses Apache Nutch, which is a scalable, production-ready Web crawler).
This new data poisoning tool lets artists fight back against generative AI
A tool called Nightshade lets artists add small, pixel-level changes to their art to poison models trained on it, e.g. causing images of "dogs" to have too many limbs or cartoonish faces, or to look like cats instead. This attack would require tech companies to painstakingly find and delete each corrupted sample.
Analyzing the Security of Machine Learning Research Code
NVIDIA's Joe Lucas shares findings from analyzing the 140GB of source code released in the Meta Kaggle for Code dataset, using manual analysis, TruffleHog, and Semgrep.
Primary findings: plaintext credentials, insecure deserialization (using pickle instead of ONNX), typos (packages could be typosquatted), and lack of adversarial robustness (not using tools like the Adversarial Robustness Toolbox (ART) or Counterfit).
Joe also released lintML, which wraps TruffleHog and Semgrep, and does other checks.
Greg Rutkowski is one of the most common names included in AI-generated art prompts due to his beautiful fantasy artwork.
MonsterAPI: A new platform that allows users to fine-tune open source LLMs without writing any code.
Air.ai: AI agents for sales and customer service reps. "Can have 10-40 minute long phone calls that sound like a real human, with infinite memory, perfect recall, and can autonomously take actions across 5,000 plus applications. It can do the entire job of a full time agent without having to be trained, managed or motivated. It just works 24/7/365."
Javi Lopez prototyped a working pumpkin-themed Angry Birds clone using only Midjourney/DALL-E 3 for art and GPT-4 for the code. He shares the prompts and code.
petrgazarov/salami
By Petr Gazarov: A declarative domain-specific language for cloud infrastructure based on natural language descriptions. Uses GPT-4 to convert the natural language to Terraform.
How to Get Samantha from Her or TARS from Interstellar on Your iPhone/Android
Daniel Miessler on how OpenAI added high quality voices to ChatGPT and how you can easily start a 2-way conversation with iOS shortcuts.
OpenAI Dev Day Announcement
OpenAI announced a truly epic amount of things at their recent conference.
New GPT-4 Turbo model that is more capable and supports a 128K context window.
GPT-4 and 3.5 are 2X - 3X cheaper.
You can use GPT Vision, DALL-E 3 and text-to-speech via API.
ChatGPT: the knowledge cut-off is now April 2023, and you can use all of the extensions (DALL·E, browsing, and data analysis) without switching between them.
You can attach files to let ChatGPT search PDFs and other document types.
I feel a great disturbance in AI, as if millions of OpenAI thin wrapper start-ups suddenly cried out in terror and were suddenly silenced.
New Assistants API - support for building agents that have goals and can call models and tools.
They've launched GPTs, essentially an app marketplace for developers to build and charge for custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.
I highly recommend watching the keynote.
AI Cameras Took Over One Small American Town. Now They're Everywhere
404 Media's Joseph Cox describes how Fusus, a system for linking a town's security cameras into one central hub and adding AI to them, has spread across the country. Fusus apparently allows integrating basically any camera feed, and then they overlay functionality on it. Surveillance state ftl.
gblues/aws-ml-opt-out
A Terraform module to opt out of AWS AI/ML data collection.
fr0gger/Awesome-GPT-Agents
Thomas Roccia has compiled a list of >100 GPT agents focused on cybersecurity (offensive and defensive), created by the community.
The Offensive ML Playbook
A database of offensive ML TTPs, broken down into supply chain attacks, offensive ML techniques, and adversarial ML, by Adrian Wood. It aims to simplify how to target ML in an organization, and includes examples like poisoning an LLM's ground truths, how to put malware in a model and distribute it, and more.
Introducing AI-powered application security testing with GitHub Advanced Security
GitHub's Asha Chakrabarty and Laura Paine announce previews for three AI-powered features:
AI-generated fixes for JavaScript and TypeScript alerts within pull requests (only 7 months after the launch of Semgrep Assistant, which also auto-recommends fixes).
Detecting generic secrets that donât follow a standard regex.
English → regex to make writing custom secret detections easier (a rough approximation is sketched below).
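The English → regex idea is easy to approximate yourself; here's a sketch (the prompt and example are mine, not GitHub's), with a compile check so obviously broken output gets rejected.

```python
# Sketch: turn a plain-English description of a secret format into a candidate regex,
# then verify it at least compiles before using it in a custom detector.
import re
from openai import OpenAI

client = OpenAI()

def regex_from_description(description: str) -> re.Pattern:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Write a single regular expression (no explanation, no delimiters) "
                       "matching: " + description,
        }],
    )
    # re.compile raises on invalid output, rejecting unusable suggestions.
    return re.compile(resp.choices[0].message.content.strip())

pattern = regex_from_description(
    "an internal API token that starts with 'acme_' followed by 32 hex characters"
)
print(pattern.pattern)
```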
Grammarly's new GenAI feature can learn your style and apply it to any text; it learns your style passively as you use their product.
Is It Cheating If She's a Sex Bot? GQ asking the important questions. Watch out for bots commenting on your posts who later try to scam you.
depot/depot.ai: An open-source Docker registry that allows easy integration of the top 100 public machine learning models from Hugging Face into your Dockerfile, using tools like BuildKit and eStargz for optimal image building and lazy-loading support.
PatentPal: "Generative AI for Intellectual Property. Automate mechanical writing in your patent applications."
Docus.ai: Talk to an AI Health Assistant, generate your health report, validate it with a doctor from the US & Europe.
Postwise: "Your personal AI ghostwriter, trained on engaging, viral content."
Bulletpapers: AI papers, summarized by AI (yo dawg...)
Morise.ai: Helps you come up with content ideas, titles, descriptions, tags, community posts.
LinkedIn has launched an AI job coach that can help job seekers see if certain roles are a good fit, research companies, shape their profiles for the best shot at a position, and prepare for interviews.
TikTok launched an AI "meme maker" that's brutally roasting users. And that's not even what the CCP officers are saying about your videos.
continuedev/Awesome-DevAI: Repo with links to resources about using LLMs while building software.
Pressure Testing GPT-4-128K With Long Context Recall - Great empirical testing by Greg Kamradt, who found that GPT-4's recall performance started to degrade above 73K tokens, less context = more accuracy, and facts placed at the very beginning and 2nd half of the document seem to be recalled better.
Why We'll Have AGI by 2025-2028
Daniel Miessler argues that AGI won't be a single model or component, but rather a system of agents that focus on different tasks and coordinate to achieve a goal, like an organization within a company.
Daniel predicts a 60% chance of AGI in 2025 and a 90% chance of AGI in 2028, where AGI is "an AI system capable of replacing a knowledge worker making the average salary in the United States."
Thanks for reading to the end!
If there are other topics you'd like me to add to this list, please reach out; I'd love to hear from you.