Ashish Nadar, Scholar — Thesis: Secure Systems Should Be Explainable

Secure Systems Should Be Explainable

Scholar Ashish Nadar argues that secure, operable software is not a set of tools; it is a set of repeatable decisions, artifacts, and controls that remain defensible as systems scale.

The Thesis in Practice
Five foundational principles from Ashish Nadar's scholarly research that transform theory into operational reality
01. Architecture: should be documented as decisions with trade-offs, not tribal knowledge.
02. Risk: should be evaluated with consistent scoring and owned mitigations.
03. Governance: should be readable and implementable, with policies and procedures that engineers actually use.
04. Operations: should produce learning artifacts such as runbooks, post-incident reviews, and measurable improvements.
05. AI: should be introduced as an engineered system that is evaluated, monitored, and access-controlled.
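The risk principle above, consistent scoring with owned mitigations, can be sketched in code. The following is an illustrative sketch only: the 1-5 likelihood and impact scales, the triage threshold, and all field names are assumptions made for the example, not a scoring methodology published by the author.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One risk-register entry; every mitigation has a named owner."""
    title: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) — assumed scale
    impact: int      # 1 (negligible) .. 5 (severe) — assumed scale
    owner: str
    mitigation: str

    @property
    def score(self) -> int:
        # Consistent scoring: the same formula for every risk.
        return self.likelihood * self.impact

def triage(risks: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return risks at or above the threshold, highest score first."""
    hot = [r for r in risks if r.score >= threshold]
    return sorted(hot, key=lambda r: r.score, reverse=True)
```

Because the scoring rule is explicit and identical for every entry, two reviewers evaluating the same register get the same triage order, which is the point of "consistent scoring" over ad-hoc judgment.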
Research Questions Guiding the Work
Open scholarly inquiries by Ashish Nadar that drive continuous methodology refinement
Q1. What minimum governance produces maximum reliability and safety?
Q2. How can CI/CD enforce policy without slowing delivery?
Q3. Which operational signals best predict incidents before they happen?
Q4. How can AI accelerate development while reducing, not increasing, security risk?
Q5. What makes AI features auditable, safe, and maintainable in production?
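One way the CI/CD question is often approached is policy-as-code: a fast, deterministic check that runs per merge request, so enforcement adds seconds rather than review cycles. The sketch below is a hypothetical example policy ("changes under src/auth/ must ship with a decision record under docs/adr/"); the paths and the rule itself are assumptions for illustration, not a description of any actual pipeline.

```python
def policy_gate(changed_files: list[str]) -> list[str]:
    """Return a list of policy violations; an empty list means the
    change may merge. Hypothetical rule: security-sensitive changes
    (here, anything under src/auth/) must include a decision record
    (any file under docs/adr/) in the same change set."""
    violations = []
    touches_auth = any(f.startswith("src/auth/") for f in changed_files)
    has_adr = any(f.startswith("docs/adr/") for f in changed_files)
    if touches_auth and not has_adr:
        violations.append(
            "auth change lacks an accompanying decision record"
        )
    return violations
```

Because the gate only inspects the list of changed files, it is cheap enough to run on every push, which is how policy enforcement can avoid slowing delivery.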
Commitments
Scholar Ashish Nadar's principles that ensure this work remains grounded, transparent, and useful:
Publish reusable artifacts (templates, checklists, decision records)
Prefer evidence and measurable outcomes over claims
Update methods based on production learning and audit findings
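The first commitment, publishing reusable decision records, pairs with the architecture principle above. As one hedged illustration, a minimal template might look like the following; the fields loosely follow the widely used ADR convention, and none of this is a template published by the author.

```
# ADR-NNN: <decision title>
Status: proposed | accepted | superseded
Context: the forces and constraints that led to this decision
Decision: what was chosen
Trade-offs: what was given up, and which alternatives were rejected and why
Owner: who is accountable for revisiting this decision
```

Recording trade-offs and an owner is what turns tribal knowledge into an artifact that stays defensible as the system, and the team, changes.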