Day 59 攻撃者をステークホルダーとしてデザインする Design with Attacker as Stakeholder
すべてのセキュリティ決定における"見えない声"
昨日、同僚に聞かれた。
「セキュリティプロジェクトのステークホルダーって誰?」
私は答えた。
「......攻撃者です。」
その瞬間、空気が止まるのを感じた。
けれど、それが揺るぎない現実だ。
私たちは普段、ユーザー、開発者、プロダクトマネージャー、経営陣、コンプライアンス担当者といった顔が見える人たちを思い浮かべる。
でも、実は最も重要なステークホルダーが抜けている。
攻撃者だ。
空席の椅子

すべてのセキュリティ設計会議には、空席がある。
攻撃者は招待されない。予算も持たない。ガバナンスのプロセスにも従わない。でも、必ずやってくる。
そして、攻撃者には非対称な優位性がある:
- 防御側はすべての脆弱性を守らなければならない
- 攻撃者はたった一つの穴を見つければいい
Schneier (2008) はこう書いている:
「セキュリティには特別なマインドセットが必要だ。セキュリティの専門家は、店に入ると万引きの方法を考えずにはいられない。コンピューターを使えば、セキュリティの脆弱性を想像する。新しい技術を耳にすれば、どうやってそれを悪用するかを考える」
これはパラノイアではなく、プロフェッショナルとしての失敗モードの想像力だ。
攻撃者目線の設計とは何か?
過去10日間 (Days 48-58)、私たちは習慣形成、意図的な練習、そしてセキュリティを自動的にする環境デザインについて探求してきた。
今日は、その基盤となる原則に立ち返る:
攻撃者の視点を組み込まずに、セキュリティ環境も安全な習慣も設計できない。
Beautement et al. (2023) は、27人のセキュリティ専門家へのインタビューから「セキュリティマインドセット」の3つの特徴を特定した:
- 積極的な敵対的思考 (proactive adversarial thinking)
- 体系的なワーストケース分析 (systematic worst-case analysis)
- 前提への挑戦 (assumption challenging)
つまり、機能を設計するとき、こう問う:
- 攻撃者はどうやってこれを悪用するだろうか?
- 最悪のシナリオは何か?
- どの前提を攻撃者は崩すだろうか?
この思考こそが、Security By Design = Security By Attack の核心だ。
「守る設計」だけでは足りない。
「攻撃される前提の設計」で、はじめて完成する。
実例: パスワードリセット設計の2つの世界
同じ機能を、攻撃者がステークホルダーとしていない場合といる場合で比較してみよう。
攻撃者がステークホルダーでない設計
ユーザーフロー:
- 「パスワードを忘れた」をクリック
- メールアドレスを入力
- システムがリセットリンクを送信
- ユーザーが新しいパスワードを設定
ステークホルダー: ユーザー、開発者、プロダクトマネージャー
結果: シンプルで使いやすいが、根本的に脆弱。
攻撃者がステークホルダーである設計
では、攻撃者に発言権を与えたらどうなるか? 攻撃者はすぐに4つの脆弱性を指摘する:
攻撃者の声 1: ユーザー列挙 (User Enumeration)
「『このメールアドレスは登録されていません』と『リセットリンクを送信しました』で応答が違う? じゃあ、有効なユーザーアカウントを列挙できるね。標的型フィッシング攻撃のリストが手に入る」
脅威: OWASP Top Ten (2021) は情報漏洩を重大な脆弱性としてリスト化している。
攻撃者の声 2: 予測可能なリセットコード
「リセットトークンが連番だったり、タイムスタンプベース? なら総当たり攻撃で他人のアカウントに侵入できる」
脅威: Bonneau et al. (2012) は、予測可能な認証トークンが統計的解析や総当たり攻撃によって悪用され得ることを示している。
攻撃者の声 3: レート制限なし
「同じアカウントに無制限にリセット要求を送れる? じゃあDoS攻撃でユーザーの受信箱を爆破できるし、総当たり攻撃も可能だ」
脅威: Mirkovic & Reiher (2004) が分類したように、リソースを枯渇させるDoS攻撃は古くからの主要な攻撃手法であり、レート制限のないエンドポイントは格好の標的になる。
攻撃者の声 4: パスワード変更後もセッション継続
「被害者がパスワードをリセットしても、俺のセッションは有効なまま? 最高じゃないか。侵入し続けられる」
脅威: OWASP Top Ten (2021) は、セッション管理の不備を認証の失敗カテゴリに含めている。
再設計されたシステム (攻撃者がテーブルにいる場合)
攻撃者の視点を組み込むと、設計は次のように変わる:
対策 1: ユーザー列挙を防ぐ
どのメールアドレスに対しても、常に同じメッセージを返す。「このメールアドレスが登録されている場合、リセットリンクを送信しました」。攻撃者はアカウントの存在を確認できない。
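考え方をコードにすると、次のような最小のスケッチになる(ユーザーストアやメール送信は説明用の仮のスタブで、特定のフレームワークの実装ではない)。ポイントは、アカウントの有無にかかわらず応答を一切変えないことだ。

```python
# 最小スケッチ: アカウントの有無にかかわらず同じメッセージを返す。
# REGISTERED_USERS と send_reset_email() は説明用の仮のスタブ。
REGISTERED_USERS = {"alice@example.com", "bob@example.com"}

def send_reset_email(email: str) -> None:
    print(f"(reset link sent to {email})")  # 実際にはメール配信サービスを呼ぶ

def request_password_reset(email: str) -> str:
    if email in REGISTERED_USERS:
        send_reset_email(email)  # 登録済みアカウントのときだけ副作用が起きる
    # 応答は常に同一。攻撃者は応答の差分からアカウントの存在を推測できない。
    return "このメールアドレスが登録されている場合、リセットリンクを送信しました"
```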
対策 2: DoSを防ぐ
同じIPアドレスから短時間に何度もリクエストが来た場合、それ以上の処理を行わない。受信箱の爆撃も、総当たり攻撃も阻止する。
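実装方法はいろいろあるが、IPアドレスごとのスライディングウィンドウで制限する考え方を、インメモリの最小スケッチで示すと次のようになる(実運用ではRedisなどの共有ストアを使うのが一般的で、しきい値も説明用の仮の値)。

```python
# 最小スケッチ: IPごとに直近1時間のリセット要求回数を数えて制限する。
# 上限値・ウィンドウ幅は説明用の仮の値。
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # 直近1時間を見る
MAX_REQUESTS = 5        # 1時間あたり5回まで

_recent: dict[str, deque] = defaultdict(deque)

def allow_reset_request(ip: str) -> bool:
    now = time.time()
    q = _recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:  # ウィンドウ外の記録を捨てる
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False                          # 超過: これ以上は処理しない
    q.append(now)
    return True
```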
対策 3: 予測不可能なトークン
リセットトークンは、NIST SP 800-90A Rev. 1に準拠した暗号学的に安全な乱数生成を使う。256ビットの完全にランダムなトークン。有効期限は15分。攻撃者が推測することは不可能だ。
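Pythonなら、OSのCSPRNGを使う `secrets` モジュールで次のように書ける(保存先は説明用のインメモリ辞書。実際にはトークンそのものではなくハッシュを永続化する)。

```python
# 最小スケッチ: 256ビットの暗号学的に安全なトークンを発行し、15分で失効させる。
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

TOKEN_TTL = timedelta(minutes=15)
_reset_tokens: dict[str, tuple[str, datetime]] = {}  # sha256(token) -> (email, 失効時刻)

def issue_reset_token(email: str) -> str:
    token = secrets.token_urlsafe(32)  # 32バイト = 256ビットのランダム値
    digest = hashlib.sha256(token.encode()).hexdigest()
    _reset_tokens[digest] = (email, datetime.now(timezone.utc) + TOKEN_TTL)
    return token  # ユーザーに送るのは生のトークン、保存するのはハッシュ
```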
対策 4: 複数チャネル通知
リセットが要求されたら、メールとSMSの両方でユーザーに通知する。本人が知らないリセット要求は、すぐに検知される。
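通知の送信先は実際にはメール配信やSMSのプロバイダーだが、考え方だけを仮のスタブで示すと次のようになる。

```python
# 最小スケッチ: リセット要求のたびに複数チャネルで本人に知らせる。
# send_email() / send_sms() は説明用の仮のスタブ。
def send_email(address: str, body: str) -> None:
    print(f"[email -> {address}] {body}")

def send_sms(phone: str, body: str) -> None:
    print(f"[sms -> {phone}] {body}")

def notify_reset_requested(email: str, phone: str | None) -> None:
    message = "パスワードリセットが要求されました。心当たりがない場合はサポートへ連絡してください。"
    send_email(email, message)
    if phone:                 # 電話番号が登録されている場合のみSMSも送る
        send_sms(phone, message)
```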
対策 5: ワンタイムトークン
リセットリンクは一度使ったら無効になる。攻撃者が古いリンクを再利用することはできない。
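先ほどのトークン発行のスケッチを前提にすると、検証時に `pop` で取り出してしまえば、同じトークンは二度と使えない。

```python
# 最小スケッチ: トークンは検証と同時にストアから削除する(= 一度しか使えない)。
# _reset_tokens は前のスケッチで定義したストアを流用。
import hashlib
from datetime import datetime, timezone

def consume_reset_token(token: str) -> str | None:
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = _reset_tokens.pop(digest, None)   # pop なので再利用は不可能
    if entry is None:
        return None                           # 未知のトークン、または使用済み
    email, expiry = entry
    if datetime.now(timezone.utc) > expiry:
        return None                           # 期限切れ
    return email                              # ここで初めて新パスワードの設定を許可する
```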
対策 6: すべてのセッションを無効化
パスワードが変更されたら、すべてのデバイスで強制的にログアウトさせる。攻撃者が既に侵入していても、その瞬間にアクセスを失う。
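セッションをサーバー側でユーザーごとに管理していれば、パスワード変更をトリガーに全セッションを破棄できる。最小のスケッチ(ストアは説明用のインメモリ実装)は次のとおり。

```python
# 最小スケッチ: ユーザー単位でセッションIDを管理し、パスワード変更時に全破棄する。
from collections import defaultdict

_sessions_by_user: dict[str, set[str]] = defaultdict(set)  # email -> 有効なセッションID
_active_sessions: set[str] = set()

def revoke_all_sessions(email: str) -> None:
    for session_id in _sessions_by_user.pop(email, set()):
        _active_sessions.discard(session_id)   # すべての端末が即座にログアウトされる

def on_password_changed(email: str) -> None:
    revoke_all_sessions(email)                 # 侵入済みの攻撃者もこの瞬間にアクセスを失う
```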
結果:
同じ機能。同じユーザー体験。でも、今度はデフォルトで攻撃に耐性がある。
経営の言葉で言うと
IBM Security (2023) の Cost of a Data Breach Report は、553組織・16か国・17業界を分析した結果、セキュリティを開発に統合した組織は、後付けで対応する組織と比べて平均176万ドルのコスト削減を実現していることを示している。
コストの内訳:
- 設計段階での脆弱性発見: 開発者1時間 ≈ $100
- コードレビュー段階での修正: 再作業4時間 ≈ $400
- 本番環境での侵害対応: インシデント対応、通知、罰金、風評被害 ≈ $4,500,000
コスト比: 約 45,000 対 1
攻撃者に設計テーブルで席を与えることは、侵害を待つよりも圧倒的に安価だ。
これは、ソフトウェアエンジニアリングの古典的な知見 (Boehm & Basili, 2001) とも一致する: 欠陥はライフサイクルの後段になるほど、修正コストが指数関数的に高くなる。セキュリティ脆弱性の場合、その代償はさらに大きい。
脆弱性の現実
National Vulnerability Database (NVD) は現在、20万件以上の既知の脆弱性を記録しており、毎年数千件が追加されている (NIST, 2025)。
それぞれの脆弱性は、攻撃者の視点が設計会議から欠けていた瞬間を示している。
最後に
すべてのセキュリティ会議には、空席がある。
その席は攻撃者のものだ。
彼らの声を想定して設計するか、それとも侵害という形で語らせるか。選択肢は、この二つしかない。
招待しなくても、彼らは座る。
だから私たちは、先に席を準備する。
セキュリティは、攻撃者と共に設計する。呼ばなくても来るのだから。
今日、私は攻撃者をセキュリティ設計のステークホルダーとして扱うべきだという概念を紹介した。
明日は、もっと深く踏み込む。研究によれば、攻撃を直接体験することは、測定可能な方法でセキュリティ行動を変える (Dalgaard et al., 2023)。攻撃を学んだ者が、最も優れた防御者になる理由を探っていく。
-----------------
Design with Attacker as Stakeholder
The Missing Voice in Every Security Decision
Yesterday at work, someone asked me about stakeholders in our security project.
The question stopped me.
Not because I didn't know the answer--I've managed security projects for years. But as I stood there thinking about security stakeholders--users, developers, management, compliance teams--I realized something uncomfortable:
We're missing the most important stakeholder.
Over the past weeks, we've explored how to design environments that make security automatic--systems that guide rather than depend on willpower (Days 55-58). We've examined how habits form in the brain and how judgment becomes intuitive through practice (Days 47-54). But today, we need to address a fundamental design principle that underpins all of this:
Who sits at the table when we make security decisions?
The Empty Chair

Imagine a security design meeting.
Around the table sit:
- Product managers, who want features shipped quickly
- Developers, who want maintainable code
- Users, who want seamless experiences
- Business owners, who want competitive advantage
- Compliance officers, who want regulatory adherence
Everyone has a voice. Everyone's concerns are heard. Compromises are negotiated.
But there's an empty chair at the table.
No name placard. No agenda items. No scheduled speaking time.
This is the attacker's chair.
The attacker wasn't invited. Doesn't have a contract. Won't attend budget meetings. Has no quarterly KPIs.
But they will come anyway.
Without invitation.
Without permission.
Without compromise.
And unlike every other stakeholder who might accept "good enough" when resources are constrained, the attacker never compromises. Never negotiates. Never stops searching.
The Stakeholder Who Never Appears on the List
Effective decision-making requires understanding all stakeholder perspectives--their interests, constraints, and potential conflicts.
But in security, we systematically exclude our most adversarial stakeholder.
Why?
Because it's uncomfortable. Because they're hostile. Because acknowledging them forces us to confront an asymmetry we'd rather ignore:
Traditional Stakeholders
- Need to succeed most of the time
- Have limited resources and time
- Accept "good enough" solutions
- Play by the rules and internal politics
- Can be negotiated with
The Attacker
- Needs to succeed only once
- Has unlimited patience
- Never settles for anything less than complete compromise
- Makes their own rules
- Cannot be negotiated with (internal excuses carry no weight)
This asymmetry is precisely why the attacker's perspective must be prioritized in security design. This isn't merely a practical observation--it's grounded in security economics. As Anderson (2001) demonstrates in his foundational work on security engineering, this "defender's dilemma" creates fundamental challenges: defenders must protect against all possible attacks, while attackers need only find one successful path.
What "Design With Attacker as Stakeholder" Means
This isn't about paranoia.
This isn't about assuming everyone is malicious.
This isn't about building fortress systems that nobody can use.
This is about systematic adversarial thinking.
When we design a password reset feature, we must ask:
- "How would an attacker abuse this?"(not just "How will users use this?")
- "What's the worst that could happen?"(not just "What's the happy path?")
- "What am I assuming that attackers will challenge?"(not just "What do requirements specify?")
Security expert Bruce Schneier captures this mindset perfectly: "Security requires a particular mindset. Security professionals... can't walk into a store without noticing how they might shoplift. They can't use a computer without wondering about the security vulnerabilities. They can't hear about a new technology without trying to figure out how to subvert it" (Schneier, 2008, p. 1). This isn't paranoia--it's professional imagination of failure modes.
This connects directly to what we've learned about environmental design in Days 55-58: just as we design environments to make good security habits automatic, we must design systems to resist abuse by default.
Research from Oxford's Journal of Cybersecurity identifies what they call the "security mindset"--a cognitive approach that systematically considers how systems can be exploited (Beautement et al., 2023). Their study, based on interviews with 27 security professionals across industry and academia, identified three core characteristics:
First, proactive adversarial thinking: Security professionals spontaneously anticipate malicious behavior before it occurs, not as an afterthought. This mirrors the proactive environmental design we discussed in Day 57--thinking ahead about what could go wrong.
Second, systematic worst-case analysis: Asking "what's the worst that could happen?" becomes standard practice, not pessimism. This is disciplined imagination of failure modes.
Third, assumption challenging: Questioning every implicit trust boundary. Where non-security professionals see established systems, security professionals see untested assumptions--much like how we questioned assumptions about willpower versus environmental design in Day 55.
The study found that this mindset significantly correlates with the ability to identify vulnerabilities before exploitation (Beautement et al., 2023). Those without this mindset build systems that work as intended. Those with this mindset build systems that resist abuse.
Design as if the attacker has a seat at your table. Because they do, whether you acknowledge it or not.
A Simple Example: The Two Password Reset Designs
Let me show you what "attacker as stakeholder" looks like in practice.
Design Without Attacker as Stakeholder
User Flow:
- User forgets password
- User enters email address
- System sends reset link
- User sets new password
Stakeholders consulted:
- Users: "We need an easy way to recover access"
- Developers: "This is straightforward to implement"
- Product: "We need this to reduce support tickets"
- Attacker: [not invited]
Result: Clean. Simple. Usable. Fundamentally insecure.
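To make the gap concrete, here is a rough sketch of what that "clean and simple" handler often looks like (a hypothetical illustration, not any particular codebase), with the attacker's coming complaints already visible in the comments:

```python
# Hypothetical sketch of the naive design; names and flow are illustrative only.
import random

USERS = {"alice@example.com"}
reset_codes = {}

def request_reset(email: str) -> str:
    if email not in USERS:
        return "User not found"                    # flaw 1: confirms which accounts exist
    code = str(random.randint(100000, 999999))     # flaw 2: short, guessable, non-crypto RNG
    reset_codes[email] = code                      # flaw 3: never expires, never rate limited
    print(f"(emailing code {code} to {email})")    # placeholder for a real mailer
    return "Reset email sent"

def complete_reset(email: str, code: str, new_password: str) -> str:
    if reset_codes.get(email) == code:
        return "Password updated"                  # flaw 4: existing sessions stay logged in
    return "Invalid code"
```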
Design With Attacker as Stakeholder
Now we add the empty chair. The attacker "speaks":
First vulnerability: "Thank you for this feature. I can figure out which email addresses have accounts in your system because you tell me 'user not found' versus 'reset email sent.'" This is called user enumeration--a well-documented vulnerability in OWASP's Top 10 Web Application Security Risks (OWASP Foundation, 2021).
Second vulnerability: "I notice your reset codes follow a predictable pattern. I can guess valid codes for other users." Research on password reset mechanisms shows that predictable tokens can be exploited through statistical analysis or brute force attacks (Bonneau et al., 2012).
Third vulnerability: "There's no limit on how many times I can try. I can send millions of reset requests to overwhelm your system." Denial-of-service through resource exhaustion has been a fundamental attack vector since the early 2000s (Mirkovic & Reiher, 2004).
Fourth vulnerability: "When users reset their password, they stay logged in on all their devices. Even if they realize I broke into their account and change the password, I keep my access." Session management failures remain in OWASP's critical vulnerability list, representing a primary attack vector (OWASP Foundation, 2021).
The Redesigned System
After listening to the attacker, here's what changes:
Always give the same response:
- Whether the email exists or not, the system says: "If this email exists, we sent a reset link"
- Now attackers can't test which emails are valid
Limit how many requests one person can make:
- Block anyone trying hundreds of password resets
- Stops attackers from overwhelming the system
Create completely random, unpredictable reset codes:
- No patterns, no sequences
- Impossible for attackers to guess
Alert users through multiple channels:
- Send email with the reset link
- Send text message: "Someone requested a password reset"
- If it wasn't them, they'll know immediately
Make reset links work only once:
- After it's used, it can never work again
- Even if an attacker finds an old reset link, it's worthless
Log out all devices when password changes:
- Phone, laptop, tablet - everything signs out
- If an attacker was logged in, they lose access instantly
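As a rough end-to-end sketch of how these protections fit together (the user store, notifier, and session store are placeholder stubs; the limits and lifetimes are illustrative, not prescriptive):

```python
# Illustrative sketch only; stores and helpers are in-memory stand-ins.
import hashlib
import secrets
import time
from datetime import datetime, timedelta, timezone

REGISTERED = {"alice@example.com"}                       # placeholder user store
SESSIONS = {"alice@example.com": {"sess-1", "sess-2"}}   # placeholder session store
TOKENS = {}                                              # sha256(token) -> (email, expiry)
REQUESTS = {}                                            # ip -> recent request timestamps

def notify(email: str) -> None:
    print(f"(email + SMS: a reset was requested for {email})")   # placeholder notifier

def request_reset(email: str, ip: str) -> str:
    recent = [t for t in REQUESTS.get(ip, []) if time.time() - t < 3600]
    REQUESTS[ip] = recent + [time.time()]
    if len(recent) < 5 and email in REGISTERED:          # rate limit: 5 per IP per hour
        token = secrets.token_urlsafe(32)                # 256-bit CSPRNG token
        digest = hashlib.sha256(token.encode()).hexdigest()
        TOKENS[digest] = (email, datetime.now(timezone.utc) + timedelta(minutes=15))
        notify(email)
        print(f"(reset link would contain: {token})")    # delivered by email in practice
    return "If this email is registered, we sent a reset link."  # same reply either way

def complete_reset(token: str, new_password: str) -> bool:
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = TOKENS.pop(digest, None)                     # pop = the link works only once
    if entry is None or datetime.now(timezone.utc) > entry[1]:
        return False                                     # unknown, reused, or expired token
    email, _ = entry
    password_hash = hashlib.sha256(new_password.encode()).hexdigest()  # use bcrypt/argon2 in practice
    print(f"(stored new password hash for {email}: {password_hash[:12]}...)")
    SESSIONS.pop(email, None)                            # force logout on every device
    return True
```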
Same Feature. Completely Different Security.
Every protection exists because we asked: "How would an attacker try to break this?"
- Generic messages → Attackers can't discover which emails have accounts
- Rate limiting → Attackers can't overwhelm the system with requests
- Random codes → Attackers can't predict or guess reset links
- Multi-channel alerts → Real users get warned immediately
- One-time use → Old leaked links become useless
- Force logout everywhere → Attackers can't maintain access after password changes
This is what happens when the attacker gets a seat at the design table.
The feature works exactly the same for legitimate users. But now it resists attack.
Why This Matters: The Economics of Early vs. Late Security
The 2023 IBM Cost of a Data Breach Report, analyzing data from 553 organizations across 16 countries and 17 industries, found something striking: organizations with security integrated into development saved an average of $1.76 million per breach compared to those treating security as an afterthought (IBM Security, 2023).
The cost differential is dramatic:
Finding a vulnerability during design: Approximately one hour of developer time, costing around $100.
Finding the same vulnerability during code review: Approximately four hours of rework, costing around $400.
Finding the vulnerability in production: Incident response, breach notification, regulatory fines, legal costs, and reputation damage--averaging $4.5 million per breach.
The cost ratio is approximately 45,000 to 1. Giving the attacker a seat at the design table is orders of magnitude cheaper than waiting for them to break down the door.
This aligns with decades of software engineering research showing that defects become exponentially more expensive to fix as they move through the development lifecycle (Boehm & Basili, 2001). Security vulnerabilities follow the same pattern--but with higher stakes.
The attacker is a stakeholder who will never compromise, never negotiate, and never stop trying.
This isn't a metaphor. This is the fundamental reality of security work, validated by decades of vulnerability research. The National Vulnerability Database (NVD) maintained by NIST documents over 200,000 known vulnerabilities, with thousands added annually (National Institute of Standards and Technology, 2025). Every single one represents a moment when the attacker's perspective was absent from the design conversation.
Today's principle--designing with attackers as stakeholders--is the foundation that makes all of this work. We can't design secure environments if we don't understand what we're defending against. We can't build systems that guide users toward security if we don't know how those systems will be attacked.
Tomorrow, we'll explore something remarkable: research showing that experiencing attacks firsthand changes security behavior in measurable ways.
Remember this always
There's an empty chair at every security design meeting. It belongs to the attacker. They will show up whether we invite them or not. The only question is: Do we design with their voice in mind, or wait for them to speak through a breach?
ーーーー
References 出典・参照
Anderson, R. (2001). Security engineering: A guide to building dependable distributed systems. Wiley.
Barker, E., & Kelsey, J. (2015). Recommendation for random number generation using deterministic random bit generators (NIST Special Publication 800-90A Rev. 1). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-90Ar1
Beautement, A., Sasse, M. A., & Wonham, M. (2023). The security mindset: Characteristics, development, and consequences. Journal of Cybersecurity, 9(1), Article tyad010. https://doi.org/10.1093/cybsec/tyad010
Boehm, B., & Basili, V. R. (2001). Software defect reduction top 10 list. Computer, 34(1), 135-137. https://doi.org/10.1109/2.962984
Bonneau, J., Herley, C., Van Oorschot, P. C., & Stajano, F. (2012). The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. In 2012 IEEE Symposium on Security and Privacy (pp. 553-567). IEEE. https://doi.org/10.1109/SP.2012.44
Dalgaard, J. C., Janssen, N. A., Kulyuk, O., & Schürmann, C. (2023). Security awareness training through experiencing the adversarial mindset. In NDSS Symposium on Usable Security and Privacy (USEC 2023). Internet Society. https://doi.org/10.14722/usec.2023.237300
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Pitman.
Grassi, P. A., Garcia, M. E., & Fenton, J. L. (2017). Digital identity guidelines (NIST Special Publication 800-63-3). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-63-3
IBM Security. (2023). Cost of a data breach report 2023. IBM Corporation. https://www.ibm.com/reports/data-breach
Mirkovic, J., & Reiher, P. (2004). A taxonomy of DDoS attack and DDoS defense mechanisms. ACM SIGCOMM Computer Communication Review, 34(2), 39-53. https://doi.org/10.1145/997150.997156
National Institute of Standards and Technology. (2025). National vulnerability database. https://nvd.nist.gov/
OWASP Foundation. (2021). OWASP top ten 2021. https://owasp.org/www-project-top-ten/
Schneier, B. (2000). Secrets and lies: Digital security in a networked world. Wiley.
Schneier, B. (2008). The security mindset. Schneier on Security. https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html