AppSec as Glue: Building Partnerships to Scale Security
Answers to additional audience questions from this BSidesSF 2025 panel on scaling security impact by building essential partnerships across teams
Panelists: Ariel Shin, Jacob Salassi, Mukund Sarma, Tad Whitaker, Jeevan Singh, Sarah Liu (Moderator)
Introduction
At BSidesSF 2025, this group of Application Security leaders got together for the panel AppSec as Glue: Building Partnerships to Scale Security, where we discussed how to scale security impact by building essential partnerships across teams such as platform engineering, compliance, threat detection, and audit. The discussion was great, spicy takes included, and we captured several remaining audience questions to answer here after the excitement settled down.

1. What do you think is the biggest gap or risk in AppSec that remains today?
[Mukund]
There isn’t a one-size-fits-all answer. AppSec risks are often deeply company-specific, shaped by the organization's architecture, culture, and risk appetite. That said, a recurring and under-addressed challenge across many companies is handling business logic vulnerabilities. These are often subtle, context-driven flaws in how features are designed and interact, and they rarely show up in scanners or during standard code reviews.
As an industry, we’ve made strong progress on common vulnerabilities - many of which can now be prevented through secure defaults or caught via static/runtime analysis. But business logic flaws remain a major gap. These are highly contextual issues that don’t show up in scans and are tough to teach or catch without a deep understanding of how a product actually works.
While embedding with teams is ideal, it’s a privilege not every AppSec team can afford. Many teams operate with limited headcount, must juggle competing priorities like compliance or incident response, or simply don’t have the bandwidth to sit in on every sprint or design review. Instead, finding ways to stay close - joining key discussions when possible, building trust with engineers, and giving pragmatic, context-aware feedback - can still go a long way. The more we understand the product and its risk surface, the better we can help teams make smart, secure decisions.
[Jeevan]
Most companies are unable to build even a basic Application Security program, and their biggest risk is failing to resource their AppSec teams appropriately.
I have chatted with dozens of companies about their Security Development Lifecycle (SDL), and most of the leaders don’t have a vision or roadmap because they are in a perpetual state of dealing with fires. The leaders are so short-staffed that they have to do a lot of the operational work (vulnerability management, threat modeling, security incidents, etc.) themselves. They are only able to plan and execute on a short-term vision, and they are not able to build a robust SDL.
Mature teams chart their path, focus on the SDL and reduce their risks. They build a secure paved path for engineers and they fully integrate security tools into their environment. Most importantly, they get their Engineering counterparts to fix security issues.
Quick tips to get engineers to fix security issues:
Metrics – have correct, clean vulnerability data and share it with your leadership team on a regular cadence. It is important for them to know how well (or poorly) the Engineering org is doing at fixing security issues.
Speak their language – build your Vulnerability Management program where the Engineers already spend their time. If they are used to fixing tickets in JIRA, create JIRA tickets; if they are used to dealing with Slack alerts, send them Slack alerts. Do not add any friction to their workflows.
Focus on secure paved paths – it is easier to convince your leadership team to invest in framework-level fixes than it is to have Engineers resolve tickets one by one. Spend your time thinking about the bigger picture and addressing root causes rather than treating symptoms.
The biggest gap is being able to do the basics, which is a hard problem to solve without having resources.
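The "speak their language" tip above can be sketched in code: shape each vulnerability finding into the payload format of the tools engineers already use, rather than asking them to visit a security dashboard. This is a minimal illustration; the field names, SLA windows, project key, and channel are assumptions, not a specific vendor's schema.

```python
# Sketch: push findings into the workflows engineers already use.
# All field names, SLA windows, and defaults are illustrative assumptions.

def to_jira_issue(finding: dict, project_key: str = "SEC") -> dict:
    """Shape a vulnerability finding as a Jira issue-create payload."""
    sla_days = {"critical": 7, "high": 30, "medium": 90}.get(finding["severity"], 180)
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity'].upper()}] {finding['title']}",
            "description": f"{finding['description']}\n\nRemediation SLA: {sla_days} days.",
            "labels": ["security", finding["source"]],
        }
    }

def to_slack_alert(finding: dict, channel: str = "#team-alerts") -> dict:
    """Shape the same finding as a Slack message payload."""
    return {
        "channel": channel,
        "text": (
            f":rotating_light: {finding['severity'].title()} severity finding in "
            f"{finding['repo']}: {finding['title']} (source: {finding['source']})"
        ),
    }
```

The point is that the security team owns the translation layer, so the engineer only ever sees a ticket or an alert in their normal queue.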
2. Let's assume AI gets to the equivalent performance level of human pentesters (or better). Where do you think it will be most valuable and best used within an enterprise?
[Jacob]
I think the best case is you get 10x human pentesters as a result. The overarching theme of penetration tests is adversarial thinking. LLMs need millions of examples to approximate domain-specific reasoning, and there is no reason to believe commercial (or any) LLMs have relevant training data sets. Even if we get great training data, I firmly believe it is a “co-pilot” situation, like we see working well for developers in Cursor.
But that's not really the end of it, there will be better SAST, better DAST, and better tools that can explore deeper into code faster than a human can do today.
TLDR - I don’t see it as a monolithic “AI pen-tester”; I see the same collection of tools, operated by the same persona, getting much, much better, and making individual pen-testers much better as a result. We are already seeing examples of this with Semgrep, Copilot, Burp AI, and Caido.
[Mukund]
The biggest impact won’t come from replacing pentesters, but from making them dramatically more effective. Think of it less as a standalone "AI pentester" and more as a smarter toolbox that scales what a human can do, faster, deeper, and more continuously.
Some of the most valuable use cases might include:
Triage and validation of bug bounty or pentest findings to confirm exploitability and reduce noise
Enhancing static and dynamic scan results by chaining issues and identifying meaningful attack paths
Continuously testing key assumptions like authorization boundaries or input constraints as code evolves
Performing context gathering and impact analysis when a new vulnerability is reported
Auto-generating remediation: opening PRs with proposed fixes where a known patch or mitigation exists
Making existing tools (SAST, DAST, fuzzers) significantly faster and more precise
AI will shine in the repetitive, high-volume areas thereby freeing humans up to focus on creativity, risk judgment, and systems thinking.
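One item in the list above, continuously testing key assumptions like authorization boundaries, can be done today without any AI: encode the expected access matrix as a regression test that runs on every commit. The sketch below is illustrative; the roles, resources, and `can_access` function stand in for a real application's authz layer.

```python
# Sketch: pin an authorization boundary as a regression test so it is
# re-checked as code evolves. Roles, resources, and the check function
# are hypothetical stand-ins for a real application's authz layer.

EXPECTED = {
    ("admin", "billing"): True,
    ("admin", "reports"): True,
    ("member", "reports"): True,
    ("member", "billing"): False,   # the assumption we never want to regress
    ("guest", "reports"): False,
    ("guest", "billing"): False,
}

def can_access(role: str, resource: str) -> bool:
    """Toy stand-in for the application's real authorization check."""
    grants = {"admin": {"billing", "reports"}, "member": {"reports"}, "guest": set()}
    return resource in grants.get(role, set())

def test_authorization_boundaries():
    # Exhaustively assert the expected decision for every (role, resource)
    # pair, so any change that widens access fails CI immediately.
    for (role, resource), expected in EXPECTED.items():
        assert can_access(role, resource) == expected, (role, resource)
```

Where AI could help is in generating and maintaining the `EXPECTED` matrix as new roles and resources appear, rather than in running the check itself.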
3. What are some techniques to infuse more charisma into an AppSec program?
[Ariel]
My top 3 techniques to infuse more charisma into an AppSec program are: Hiring, Recognition, and Real World Examples.
Hiring: Seek out folks who have an aptitude for, or a proven track record of, establishing strong relationships with engineering and influencing teams to embrace security and reduce risk. The right people can transform how security is perceived across the organization.
Recognition: Build a culture of recognition. Either through a Slack channel, direct feedback, or a recognition tool, consistently celebrate engineers who have gone above and beyond for security — no matter how big or small. This not only makes the individual engineer feel valued, but also creates a positive culture where security work is recognized and encourages more folks to play an active role.
Real World Examples: Make security threats tangible and relevant. Security risks can feel abstract and engineers feel fatigued by the constant onslaught of new threats and attacks. Using bug bounty findings and recent headlines to illustrate how threats became real risks can demonstrate how security plays a vital role in securing companies and users. When people see the connection to real impacts, they engage more meaningfully. Just be strategic about which examples you highlight—focus on the most persistent threats at your company or ones that are often overlooked. Otherwise, this approach can backfire and just become more noise in an already noisy space.
[Mukund]
Charisma is really about approachability and influence. A program that listens, adapts and brings positive energy will always attract more buy-in than one that operates from a distance.
Be pragmatic, not dogmatic: Developers respect security folks who understand tradeoffs and give actionable advice, not just “don’t do that.”
Meet devs where they are, and work in the open: Integrate into tools they already use (Slack, GitHub, CI/CD), and share visibility into what security is doing. Open PRs, log findings transparently, give feedback in the same spaces where devs collaborate. When security feels like part of the team’s workflow and not an outside force, charisma and trust follow naturally.
Tell better stories: Don’t just say “XSS in production”; explain what could’ve happened, how you found it, and how it was fixed. People remember narratives more than CVE IDs.
4. How do you identify and partner with those in a company who are motivated to be a champion of a security project, but also are empowered and capable of achieving it?
[Ariel]
Generally, identifying a potential security champion or partner is the easy part — they're typically the engineers popping into the security channel, asking questions during training, or taking a vested interest in security reviews. The challenge more often lies in building an effective and meaningful partnership. We need clear goals and roles so that both parties benefit from the partnership. I've found it helpful to establish these partnerships by addressing a few key questions:
What is the purpose of this partnership? We want to understand why a partnership would uniquely solve this problem versus other approaches. Getting clear on this upfront saves everyone time. For example, an application security engineer could partner with a product team with the goals of delivering threat models on key components of a product and a roadmap to tackle the backlog of vulnerabilities, prioritizing critical and high severity issues. Security benefits by focusing on one product instead of playing whack-a-mole across different products, while engineering gets dedicated resources to tackle persistent security issues.
What role does engineering/security play and how does their experience provide a unique offering? Each partner should bring something to the table that they're uniquely positioned to contribute. If either side is just going through the motions, the partnership won't deliver value. Security leads threat modeling sessions to identify and prioritize threats, while engineering provides critical system and business context, then translates findings into actionable remediation plans.
How do we measure the success of this partnership? We want to ensure the partnership is set up for success by setting expectations clearly and measuring outcomes. Vague goals lead to vague results. An example success criteria is measuring that 100% of critical and high vulnerabilities are remediated within SLA and joint signoff from security and engineering leadership on a security roadmap.
How long should this partnership last? We don't want this partnership to drag on indefinitely without clear outcomes. Having a timeframe or milestone-based endpoint gives everyone clarity and creates healthy urgency. Timelines can depend on product complexity – ranging from 1 month for smaller and newer products to 2 quarters for larger, legacy systems with many unknowns.
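The success criterion above ("100% of critical and high vulnerabilities remediated within SLA") is straightforward to compute from vulnerability data. This is a hedged sketch: the field names (`severity`, `opened_on`, `closed_on`) and SLA windows are assumptions, not a standard schema.

```python
from datetime import date, timedelta

# Sketch of the partnership success metric above: the fraction of
# critical/high findings remediated within their SLA window.
# Field names and SLA day counts are illustrative assumptions.

SLA_DAYS = {"critical": 7, "high": 30}

def sla_compliance(findings: list[dict], today: date) -> float:
    """Fraction of in-scope findings closed within SLA. Findings that are
    still open but not yet past their deadline are excluded from the math."""
    in_scope, met = 0, 0
    for f in findings:
        window = SLA_DAYS.get(f["severity"])
        if window is None:
            continue  # medium/low are out of scope for this metric
        deadline = f["opened_on"] + timedelta(days=window)
        closed = f.get("closed_on")
        if closed is None and today <= deadline:
            continue  # open but still within SLA: don't count yet
        in_scope += 1
        if closed is not None and closed <= deadline:
            met += 1
    return 1.0 if in_scope == 0 else met / in_scope
```

A number like this, reported on a regular cadence, gives both security and engineering leadership the same view of whether the partnership is meeting its bar.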
5. What's the most challenging partnership to form/maintain and why?
[Mukund]
The most challenging partnerships are with teams (or leaders) who don’t believe they’re accountable for the security of their application or product. When that sense of ownership is missing, security quickly becomes a back-burner concern - issues go unaddressed, mitigations are delayed, and the relationship becomes reactive. No tooling or automation can replace real accountability.
Changing that mindset isn’t always easy. It often depends on executive beliefs, team incentives, and the broader culture. But AppSec can influence it. The most effective security leaders are pragmatic: they frame security as part of product quality, tie it to real business impact, and only push when it truly matters. When engineering leaders know you're not crying wolf - that you’re reaching out because it’s genuinely important, they're far more likely to listen and engage. That trust is everything.
It’s also important to acknowledge the reality that most teams are constantly balancing KTLO (Keep the Lights On) work, roadmap delivery, and security debt. If you can clearly explain why something matters, how it compares to other security priorities they might already be working with you on, and what the actual risk is, you shift the conversation from “security says no” to “here’s a shared decision to make.” That builds trust and over time, accountability.
Strong partnerships are built on shared goals and mutual respect. Security isn’t something you hand off, it’s something you build in, together.
6. What do AppSec vendors most often get wrong about the need for AppSec to build bridges across an organization?
[Jacob]
AppSec vendors keep introducing tools where the buyer is security and the user is development. This is insta-friction and pain, and causes misaligned incentives. If AppSec vendors want to sell to developers, they better become credible in that space rather than just marketing themselves that way. IDE & PR integrations are ways AppSec products try to appeal to developers today, but when I think of real appeal I think of how developers RUN toward Cursor and Copilot.
Why do they run to it? Because writing code for developers directly adds value. Directly adding value is the bar today. IDE & PR integration are just table stakes.
[Jeevan]
The biggest challenge with vendors is that they believe that their tool should be in the center of the (security) world.
In order to reduce friction with the Engineering teams, tools should hook into the Engineering team’s workflows and should behave like other software development tools. Engineers want all of their scanners to comment on GitHub PRs, and those scans should run quickly. They also want scanners to be configurable and policy-driven: scanners should be able to monitor, comment, or block based on Engineering preferences.
There are a number of things that engineers do not want:
Logging into another UI to find out about scan results - this adds unnecessary friction to the process, no one has the time to log into separate tools to see results.
Scans that run longer than a few seconds - there are already a number of scans running in the pipeline, the slower things run, the more likely engineers will be to bypass them.
Updating configuration by logging into a vendor UI - it is 2025; everything should be configuration-as-code, and configuration changes should live in a repo.
One thing that I tell all vendors: at minimum, have an API to pull data out of your tool, but ideally offer native integrations with S3 or Snowflake to push data. Most companies have a preferred data analytics tool where they build their dashboards. Every company does things slightly differently, which is why it is important to push the data to the right places. While vendors can make reports pretty in their own tool, the people who need to see the dashboards will never log into those tools.
Vendors should realize that the more they invest into making the process low-friction for Engineers, the stickier their product will be in the ecosystem.
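The configurable, policy-driven behavior described above can be sketched as config-as-code plus a tiny evaluator: a policy file checked into the repo decides, per scanner and severity, whether a finding only monitors, comments on the PR, or blocks the merge. The policy keys, file location, and action names below are illustrative assumptions.

```python
# Sketch: policy-driven scanner gating. The policy would normally live as
# config-as-code in the repo (e.g. a hypothetical .security/policy.yaml);
# here it is inlined. Scanner names and actions are illustrative.

POLICY = {
    "sast": {"critical": "block", "high": "comment", "default": "monitor"},
    "secrets": {"default": "block"},
    "sca": {"critical": "comment", "default": "monitor"},
}

def action_for(scanner: str, severity: str) -> str:
    """Resolve one finding to 'block', 'comment', or 'monitor' per the policy."""
    rules = POLICY.get(scanner, {})
    return rules.get(severity, rules.get("default", "monitor"))

def gate_pr(findings: list[dict]) -> str:
    """Return the strictest action any finding on this PR triggers."""
    rank = {"monitor": 0, "comment": 1, "block": 2}
    worst = "monitor"
    for f in findings:
        action = action_for(f["scanner"], f["severity"])
        if rank[action] > rank[worst]:
            worst = action
    return worst
```

Because the policy lives in the repo, Engineering can tune blocking behavior through a reviewed PR rather than a vendor UI, which is exactly the low-friction property that makes a tool sticky.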
7. What are some current compliance standards that mention Application Security controls specifically? FedRAMP? HIPAA? What are you doing to help the engineering org care about compliance?
[Jacob]
My experience is that compliance creates an extremely weak incentive toward secure design.
Baseline from compliance is “security review is done for every change”, but the level of variability in what constitutes a security review is massive, and totally at the discretion of the security teams. When I want an engineering org to care, I do not talk about AppSec as a compliance enabler, but instead frame the conversation around increasing developer productivity and feature velocity. This could be modeled as crawl, walk, and run:
Crawl: Go faster by completing security reviews faster.
You can get here by adding staff.
Walk: Go faster by reviewing fewer low risk things.
You can get here by modifying your existing staff's behavior and engaging developer teams.
Run: Go faster by contributing frameworks and libraries used to build the product that eliminate the need for any reviews.
You will rarely get here unless you are an experienced team of SWEs shipping production code in the product org.
Most teams can achieve crawl and walk starting with a compliance mandate alone. Achieving run requires both an engineering org that makes strategic investments and talent to be credible in the space. Security teams are rarely staffed with SWEs shipping production code every day. Getting buy-in to staff it, and being positioned correctly in the org structure to deliver it requires substantial executive sponsorship.
[Jeevan]
Truthfully, if Application Security leaders have done their job right, Engineers should never have to care about compliance or specific compliance frameworks. Security engineers shouldn’t have to worry much about compliance either.
The compliance program at your organization should be security-led vs having a security program at your organization that is compliance-led. The compliance framework that your organization adheres to should be the baseline of your program and you should strive to deliver more than the baseline.
For example, many compliance frameworks have a training component that requires all software developers to know about the OWASP Top 10 and secure coding, but only requires them to complete that training once a year.
Your developers will not learn much about security from a single annual training. A strong security training program has multiple touch points and many different training modules. While the OWASP Top 10 is an important tool, I would much prefer that my developers are aware of:
The most frequently discovered vulnerabilities at the org
The safe places to store secrets
How to perform a secure code review
How threat actors try to break into the web application
Don’t build a security program based on compliance controls; rather, build your security program on the fundamentals that are important to you, and get your Engineering counterparts to care about those security problems too.
Conclusion
Thank you all for your time!
Want more? Catch our panel recording on the BSidesSF YouTube channel, or feel free to connect with us on LinkedIn for further discussion.