
[tl;dr sec] #278 - North Korean IT Workers, How Sentinel One Defends Itself, How Threat Actors Use Claude

Several posts on DPRK IT workers infiltrating companies, SentinelOne on fighting off threat actors, and Anthropic on how attackers have been using Claude

Hey there,

I hope you’ve been doing well!

❤️‍🩹 Sharing Vulnerabilities

Now that the bedlam of RSA is over, I wanted to share a bit more about my BSidesSF keynote, “Sharing Vulnerabilities.”

Subtitle: “Emotional intelligence isn’t just a coping strategy for the logically impaired, and other lessons I should have learned earlier.” 😅 

To be honest, I was very unsure of how the talk would be received, as it’s primarily about sharing true stories, including ones about my childhood, friendships, and family.

At the end, I gave people permission to come up and give me a hug or share their story with me over the course of BSidesSF. And so many people did! It was very moving 🥹

The talk was drawn live on a poster board by the very talented Silent James; see the web version of this issue for a picture of it.

If you watched the talk and liked it, feel free to tell me how it landed for you, what it made you think of or want to do differently, or really anything at all 🤗 

A number of people have already reached out about it, and it’s meant a lot 🙏 

If you haven't seen the talk, I'll share when it's live on YouTube in a few weeks!

Sponsor

📣 Protect Your Google Workspace with Purpose-Built Security

Your Google Workspace is the backbone of your business, yet most teams use security tools that weren’t designed to protect it. Material Security changes that. Built specifically for Google Workspace, Material is a detection and response platform that protects Gmail, Google Drive, and accounts by proactively eliminating security gaps, stopping misconfigurations, and preventing shadow IT before they turn into costly problems. With real-time monitoring and automatic fixes, Material keeps your workspace secure with minimal effort, reducing human error and freeing up your team to focus on work that matters.

Google Workspaces contain a ton of sensitive info, and Material does some cool things around preventing data exfiltration (did you know you can actually rewrite old emails? 🤯), detecting attacker persistence mechanisms, and more.

AppSec

Drag and Pwnd: Leverage ASCII characters to exploit VS Code
PortSwigger's Zakhar Fedotkin describes how ASCII control characters like SOH and ETX can be exploited in modern terminal emulators, particularly in VS Code's use of node-pty. He shows how inserting these characters into Visual Studio Code's run configuration arguments or filenames during drag-and-drop can lead to command injection. See this proof-of-concept test project if you want to try the attack yourself. These techniques have been integrated into the Active Scan++ Burp Suite extension.
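
As a minimal illustration of the payload class (not the exact payload from the research), the sketch below creates a file whose name embeds SOH and ETX bytes, which most Linux filesystems happily accept; dragging a file like this into a terminal or run configuration is where the parsing trouble starts. The filename contents are made up for demonstration.

```python
# Illustrative only: embed ASCII control characters (SOH=0x01, ETX=0x03) in a
# filename to observe how a terminal emulator / drag-and-drop handler treats
# them. The payload shape is hypothetical, not the PoC from the research.
import pathlib

SOH, ETX = "\x01", "\x03"
name = f"report{ETX}echo pwned{SOH}.txt"  # control bytes spliced into the name
pathlib.Path(name).touch()
print("created:", repr(name))
```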

RCE in Tailscale, DNS Rebinding, and You
This is from 2022, but it's an impressive write-up. Jamie McClymont and Emily Trau describe finding multiple vulnerabilities in Tailscale, including DNS rebinding attacks against the LocalAPI and PeerAPI, leading to remote code execution.

💡 Great example of finding some initial bugs, chaining and iterating for higher impact, bypassing mitigations, etc.

Security is a Negotiation Problem
Straight up savage advice from Jonathan Price on the importance of negotiating with your security vendors, a quick overview of the basics of negotiating, and a highlight of his top security-specific negotiation strategies, including: the importance of competitive bids, value-negging, creating external constraints, pressuring sales reps by timing purchases near quarter-end, and offering to be a reference customer.

💡 Negotiating is important, and these are good tactics. Though personally I disagree with some of the negative framing: I think most salespeople aren’t evil, they’re just following incentives (Incentives Rule Everything Around Me™️), just like you and the company are when you’re negotiating your compensation. I’ve been enjoying Jonathan’s blog, and I encourage you to check out his other posts 👍️ 

Sponsor

📣 WorkOS: Authorization for AI Agents

AI agents and MCP servers need controlled access to specific resources.

WorkOS AuthKit provides scoped OAuth access and built-in role-based access control, so you can assign roles, set permissions, and control exactly which agents or services can access specific tools.

AuthN/AuthZ with agents and MCP is the wild west right now; great to see a company working on this 👍️ This area is going to be big in the near future.

Cloud Security

TrailAlerts: Take Control of Cloud Detection in AWS
Adan Alvarez announces TrailAlerts, an open-source, serverless cloud detection tool that allows users to define Sigma rules for custom alerting on AWS CloudTrail events. It’s lighter weight than a SIEM, and more customizable than GuardDuty.

How it works: CloudTrail logs are stored in S3, a CloudTrail Analyzer Lambda processes new logs using Sigma rules, matching events are sent to an SQS queue, then an Event Processor Lambda consumes messages from the queue and sends alerts via SNS or SES. The post gives example use cases of detecting IAM admin persistence and detecting multiple secret accesses.
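
Here's a rough sketch of that detection step, assuming an S3 event notification trigger and a toy Sigma-style rule for the IAM admin persistence use case. This is illustrative, not TrailAlerts' actual code; the queue URL and rule fields are placeholders.

```python
# Minimal sketch of the pipeline described above: a Lambda reads a CloudTrail
# log object from S3, applies a simple Sigma-style match, and forwards hits
# to SQS for the alerting Lambda. Names and the rule are illustrative.
import gzip, json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/trail-alerts"  # placeholder

# Toy rule: flag AttachUserPolicy calls granting AdministratorAccess
RULE = {
    "eventName": "AttachUserPolicy",
    "requestParameters.policyArn": "arn:aws:iam::aws:policy/AdministratorAccess",
}

def get_field(event, dotted):
    """Resolve a dotted key like 'requestParameters.policyArn'."""
    for part in dotted.split("."):
        event = (event or {}).get(part)
    return event

def handler(event, _ctx):
    for rec in event["Records"]:  # S3 put notifications
        bucket, key = rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        for entry in json.loads(gzip.decompress(body))["Records"]:
            if all(get_field(entry, k) == v for k, v in RULE.items()):
                sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(entry))
```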

Google Cloud Account Takeover via URL Parsing Confusion
Mohamed Benchikh describes finding a URL parsing confusion when handling the redirect_uri OAuth parameter, specifically between the gcloud CLI utility and Chrome. Attack flow: the attacker crafts a malicious OAuth authorization request and sends the link to the victim; the victim sees a totally legitimate OAuth authentication flow for the Google Cloud SDK client, but after they log in and consent to the permissions they get redirected to the attacker’s website, and the attacker can then use the OAuth grant to perform API calls on the victim’s behalf.

💡 Google has fixed this issue, but I think the write-up is interesting because of the methodology discussion of using fuzzing to find parser differences. Mohamed wrote a Python script that mutated different URLs, applying various encoding tricks, alternate notations, and edge cases (the post lists examples in section 5B). If Google’s gcloud OAuth flow had an issue like this, I’m sure similar issues abound elsewhere.
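
If you want to play with the idea, here's a tiny sketch of the differential-parsing approach (not Mohamed's actual script): generate encoding variants of a redirect_uri-style URL, see which host one parser extracts, and compare against a second parser or the browser's behavior. The hostnames and mutations are made-up examples.

```python
# Toy URL-mutation fuzzer: which hostname does Python's parser see for each
# variant? Parser disagreements on variants like these are what the research
# hunts for. Hostnames and mutations below are illustrative only.
from urllib.parse import urlparse

legit = "localhost:8085"
attacker = "evil.example"

mutations = [
    f"http://{legit}\\@{attacker}/",   # backslash-vs-slash confusion
    f"http://{legit}%2F@{attacker}/",  # percent-encoded slash before userinfo
    f"http://{attacker}%23@{legit}/",  # percent-encoded fragment trick
    f"http://{legit}.{attacker}/",     # legit host as a subdomain of attacker
]

for url in mutations:
    print(f"{url!r:50} -> hostname: {urlparse(url).hostname}")
```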

Secure Cross-Account Access is Tricky. Four Common Dangerous Misconceptions
Token’s Eliav Livneh explores four major misconceptions about AWS IAM cross-account trust policies and the corresponding realities, which can create dangerous privilege escalation pathways, including (see the trust policy sketch after the list):

  • A role trusting an external principal extends the risk of its entire account to include the external principal’s risks.

  • When you trust an external principal, you effectively trust their entire account.

  • The “billing” management account is sensitive. If it’s compromised, an attacker can take control of every other AWS account in the org.

  • Allowing a less secure account to assume a role in a more sensitive account creates an easy privilege escalation path for attackers.
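
To make the second point concrete, here's a hedged sketch (account IDs, role name, and external ID are placeholders) of a trust policy that names an external account's root as the principal; which identities in that account can actually assume the role is then decided entirely by that account's own IAM policies.

```python
# Illustrative trust policy: trusting an external account's root delegates the
# "who can assume this role" decision to that account's IAM admins.
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # ANY principal in 222222222222 that its own policies allow to call
        # sts:AssumeRole can assume this role.
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": "sts:AssumeRole",
        # An ExternalId condition narrows this somewhat, but authorization
        # still happens inside the trusted account.
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="vendor-access",  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```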

Shadow Roles: AWS Defaults Can Open the Door to Service Takeover
Aqua Security's Yakir Kadkoda and Ofek Itach describe how default AWS service roles can lead to service takeovers and privilege escalation. Specifically, the AmazonS3FullAccess managed policy is dangerous because various other AWS services use S3 to store files, and if you can modify them → code execution, lateral movement, etc. The post gives case studies across SageMaker, Glue, and EMR. AWS responded by reducing default privileges in a number of places.

Also, TIL SageMaker automatically runs inference.py, even if trust_remote_code=False, because the flag only controls Hugging Face’s Transformers loading – not SageMaker’s model serving behavior. So that setting does not disable remote code execution from a malicious model 🫠 
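
If you want to check your own accounts, here's an illustrative audit sketch (not from the Aqua post; the role name prefixes are a rough heuristic for default-style service roles) that flags such roles with the broad AmazonS3FullAccess managed policy attached.

```python
# Rough audit sketch: flag default-looking service roles that have the broad
# AmazonS3FullAccess managed policy attached. Prefixes are a heuristic only.
import boto3

iam = boto3.client("iam")
BROAD = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
DEFAULT_PREFIXES = ("AmazonSageMaker-", "AWSGlueServiceRole", "EMR_")  # heuristic

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        name = role["RoleName"]
        if not name.startswith(DEFAULT_PREFIXES):
            continue
        attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
        if any(p["PolicyArn"] == BROAD for p in attached):
            print(f"[!] {name} has AmazonS3FullAccess attached")
```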

Blue Team

What It Takes to Defend a Cybersecurity Company from Today's Adversaries
SentinelOne’s Tom Hegel, Aleksandar Milenkoski and Jim Walter discuss multiple real-world attacks against SentinelOne, including DPRK IT workers posing as job applicants (“This vector far outpaces any other insider threat vector we monitor.”), ransomware groups attempting to access their EDR platform for testing malware (selling access via underground markets, spinning up fake companies and buying EDR from a reseller), and Chinese APTs targeting their supply chain.

“Our team has tracked roughly 360 fake personas and over 1,000 job applications linked to DPRK IT worker operations applying for roles at SentinelOne — even including brazen attempts to secure positions on the SentinelLabs intelligence engineering team itself.”

💡 This is an excellent post: security vendors are indeed often a target, and it’s awesome to see SentinelOne talking about it, sharing both what they’re seeing and how they’re defending against attacks, such that others can benefit. More of this please! 👏 🙏 

Hello 0-Days, My Old Friend: A 2024 Zero-Day Exploitation Analysis
Google Threat Intelligence Group’s Casey Charrier, James Sadowski, Clement Lecigne, and Vlad Stolyarov analyze 2024 zero-day exploitation trends. Great post; love the figures, discussion of various threat actors, and walkthrough of example 0-days.

  • They tracked 75 zero-day vulnerabilities exploited in the wild in 2024, a decrease from 2023 (98 vulns).

  • Commercial surveillance vendors appear to be increasing their operational security practices, potentially leading to decreased attribution and detection.

  • Attackers are increasing their focus on security and networking products: they identified 20 security and networking vulnerabilities, over 60% of all enterprise technology 0-days (e.g. Ivanti, Palo Alto Networks PAN-OS, Cisco Adaptive Security Appliance).

  • Mobile and browser vulnerabilities dropped, likely due to exploit mitigations raising the cost of exploitation. 🤘 

  • “For the first year ever, we also attributed the exploitation of the same volume of 2024 zero-days (five) to North Korean actors mixing espionage and financially motivated operations as we did to PRC (China)-backed groups.”

Red Team

django-88/NomadScanner
A hardened, memory-only Windows port scanner built for red teamers and penetration testers who need maximum stealth and OPSEC. It sends fully in-memory HTTP probes with randomized network characteristics to blend into normal traffic patterns.

Linux Kernel Exploitation - CVE-2025-21756: Attack of the Vsock
Great technical write-up by Michael Hoefler on exploiting a Linux kernel vulnerability in vsock, including analyzing the patch, bypassing AppArmor, defeating kASLR, and more. PoC here.

io_uring Rootkit Bypasses Linux Security Tools
ARMO’s Amit Schendel describes a major blind spot in Linux runtime security tools caused by the io_uring interface—an asynchronous I/O mechanism that bypasses traditional system calls. Most tools, including Falco and Microsoft Defender, fail to detect rootkits using io_uring because they rely heavily on syscall monitoring.

They’ve released Curing, a proof-of-concept rootkit that operates fully via io_uring. Potential detection strategies: monitoring anomalous io_uring usage, leveraging Kernel Runtime Security Instrumentation (KRSI), and finding alternative kernel hook points.
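
As a taste of the first detection strategy, here's a crude, hedged sketch (not ARMO's detector): on Linux, io_uring instances show up as anon_inode:[io_uring] file descriptors, so you can at least inventory which processes are using io_uring and alert on unexpected ones.

```python
# Crude io_uring usage inventory: walk /proc and count file descriptors whose
# symlink target contains "io_uring". Run as root for the full picture.
import os
from pathlib import Path

def procs_using_io_uring():
    """Map 'pid (comm)' -> count of io_uring file descriptors."""
    hits = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = Path("/proc") / pid / "fd"
        try:
            count = sum(
                "io_uring" in os.readlink(fd_dir / fd) for fd in os.listdir(fd_dir)
            )
            if count:
                comm = (Path("/proc") / pid / "comm").read_text().strip()
                hits[f"{pid} ({comm})"] = count
        except OSError:
            continue  # process exited or we lack permission
    return hits

if __name__ == "__main__":
    for proc, n in sorted(procs_using_io_uring().items()):
        print(f"{proc}: {n} io_uring fd(s)")
```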

AI + Security

OWASP Agentic Security Initiative (ASI) - Insecure Agent Samples
The OWASP Top 10 for LLM Apps repo has examples of insecure code and/or security misconfigurations for six common agent frameworks: LangChain, LangGraph, CrewAI, AutoGen, OpenAI Swarm, and Amazon Bedrock Agents.

atiilla/sqlmap-ai
By Atilla: An AI-powered wrapper around SQLMap that makes SQL injection testing more accessible and automated. Includes automated result analysis and next step suggestions, as well as adaptive step-by-step testing with DBMS-specific optimizations and WAF bypass.

mbhatt1/GhostLine
By Manish Bhatt: Feed it a number. Your cloned voice does the social engineering, while you sip your coffee. It uses a FastAPI server, ngrok tunnel, Twilio, ElevenLabs, and lets you customize the persona’s conversation flow. And he prototyped this in ~3 hours 🤯 

💡 I got to chat with Manish a few times during RSA week, and I was impressed by his intelligence and kindness. Highly recommend following his blog and tools.

How AI services power the DPRK’s IT contracting scams
Okta’s Liam Demody et al. describe their research into how North Korean actors are leveraging GenAI tools to create convincing personas and gain remote technical jobs globally ("DPRK IT Workers"), including: using AI-enhanced services to manage multiple personas (multiple mobile phone, instant messaging, and email accounts, plus other related chat services, behind a “single pane of glass”), translating communications, generating and critiquing CVs and cover letters, conducting mock interviews, and bypassing automated checks.

“The scale of observed operations suggests that even short-term employment for a few weeks or months at a time can, when scaled with automation and GenAI, present a viable economic opportunity for the DPRK.”

💡 You should take this seriously. To me, “hiring North Koreans” seemed very fantastical, the kind of thing that happens to other people in the news, but I had two different friends during RSA week alone say they were dealing with it. You might be too. Good luck.

Detecting and Countering Malicious Uses of Claude: March 2025
Anthropic shares a number of case studies of malicious uses of Claude, including:

  • A professional 'influence-as-a-service' operation in which Claude was used not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users. Claude acted as an orchestrator, deciding what actions the bot accounts should take based on politically motivated personas. Full report.

  • Enhancing an actor’s systems for identifying and processing exposed usernames and passwords associated with security cameras, while collecting information on Internet-facing targets to test the credentials against.

  • A recruitment fraud campaign leveraged Claude to enhance the content of scams targeting job seekers in Eastern European countries.

  • An individual actor with limited technical skills developed malware that would typically require more advanced expertise.

  • See also Monitoring computer use via hierarchical summarization.

Misc

Relationships

AI

Politics

If I took a shot of non-alcoholic beer every time the U.S. gov’t recently had poor OpSec or made a bad security choice, I’d be constantly blacking out 🤦 

✉️ Wrapping Up

Have questions, comments, or feedback? Just reply directly, I’d love to hear from you.

If you find this newsletter useful and know other people who would too, I'd really appreciate if you'd forward it to them 🙏

Thanks for reading!

Cheers,
Clint
@clintgibler