This document summarizes 25 stakeholder interviews held across six weeks. It assumes working knowledge of the words and phrases (ATO, Controls, etc.) used among federal IT security compliance professionals.
No lingua franca. Many of the agencies we spoke with employed different words and definitions when discussing risk management; what it meant to “ATO a system” ultimately depended on whom we asked. Nearly all of our interviews required establishing a shared vocabulary at some point. For example, one agency described an internal category of software as “GSS tools,” and one vendor described anything agencies did to meet a control as a “component.”
Disparate mental models. Risk management requires that an agency thoroughly consider the ways in which its information systems and information practices (what the late professor Ursula Franklin collectively called “Technology as Practice”) relate to potential threats. It’s a holistic process. Yet many interviewees described a culture in which much of their internal documentation is narrowly scoped because it is predominantly created and maintained for external stakeholders: auditors, OMB, IGs, etc.
Further, the authoring of compliance documentation is often delegated to security professionals with a limited perspective of the agency’s business. Thus compliance documentation only shares one side of the story. (We heard “Security doesn’t know the business needs. They frame security in terms of anything that’s going to get them in trouble.”) With no internal source of truth, agency staff often struggle to understand what risk actually means to the agency.
Everyone wants a map. Many interviewees relied on visual metaphors to describe what was missing: People talked about “mapping” controls to policies, the need for a single “point of reference,” and working to get a sense of the “bigger picture.” Two agencies we heard from relied on spreadsheets to “map” controls into families with inheritance. In both cases we asked how this mapping came about, if it was sanctioned by the CIO, and how newcomers or outsiders were taught to use this mapping. In neither case was the map championed as a canonical point of reference.
Workflows are often local and/or proprietary. Many interviewees described workflows that primarily relied on files stored on local media: word-processing documents, spreadsheets, etc. stored on hard drives and shared over email or SharePoint. Though this practice works well enough, it unintentionally forgoes the benefits of collective data practice. If an agency were to pool all of its control data (currently trapped in SSPs) in one location, agency staff could then use that data to better understand how their agency thinks about risk. (We heard “Being able to pass around the data in security packages is critical. One of our critiques of FedRAMP is that everything is in Word documents. We don’t want to pay someone $200 an hour to transcribe data from one format to another.”)
Some larger organizations assess risk reductively… One interviewee described how, in working through the ATO process for their respective systems, two offices within the same department came up with their own incident response plans. The department eventually became aware of the duplicative efforts and created a blanket incident-response policy — but this wasn’t done by design. Relatedly, we heard that the quality of ATOs (that is, the quality of risk-assessment practices) can vary not only across agencies but within them as well.
… while smaller agencies have fewer degrees of freedom. One of the biggest takeaways for us — maybe obvious in retrospect — is that smaller agencies are often underserved: The small agencies with whom we spoke have security teams of 1-3 people, and correspondingly small budgets. Smaller agencies are also more appreciative of prescriptiveness. They’re more comfortable relinquishing direct control over their system boundary (through the use of shared services).
Nearly every agency we spoke with makes use of a “general support system” (GSS) to account for things like data transport, data storage, a “control plane,” organizational rules, and enterprise apps. This is usually operated as if it were a single “system,” under a single ATO. At least two agencies we spoke with are actively building applications on top of their GSS.
Automating compliance is a huge, vague goal. Interviewees suggested many different uses of automation which extended well beyond SSPs. In short, there’s no agreement about what to automate; instead, many people tend to engage with the benefits of ATO reuse. This is a very different goal. The most thought-provoking approach we heard began with standardizing not controls or templates, but the way in which agencies structure and format their policies.
Compliance tools draw on familiar metaphors. Some SSP-writing tools are designed to provide contextually relevant boilerplate copy, just like contract-writing tools. Another tool we heard about “resembles TurboTax” (that is, it provides a wizard interface). One vendor we met provided “open source supply chain management.” Relatedly, many aspects of risk management feel analogous to test- and behavior-driven development.
B. What’s working
Agencies take risk seriously. Everyone we interviewed took seriously their duty to secure and protect federal information systems. One small agency we talked to isn’t subject to FISMA reporting requirements, but worked just as hard to follow the spirit of the RMF.
RMF is universally viewed as an improvement over its predecessors. Interviewees said that RMF requires fewer “checkboxes for checkboxes’ sake” than the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP), and encourages more high-level, strategic thinking. RMF’s emphasis on continuous monitoring is also seen as a positive because it encourages a more proactive approach to risk management.
Agencies leverage technical efficiencies. One of the things we heard that challenged (but didn’t entirely disprove) our initial hypothesis (that “compliance burden is independent of security maturity”) was that, for some agencies, the longer they focus on compliance, the easier it becomes. A few agencies are creatively building upon previously authorized systems to inherit controls and ultimately reduce their compliance burden.
Compliance prompts collaboration. Risk management raises intersectional concerns, bringing together executives, program managers, developers, etc. Management learns more about things on the ground, and folks on the ground learn more about the concerns of the organization as a whole.
FedRAMP is well regarded within government. One person described FedRAMP as a “get out of jail free” card, a sentiment corroborated by the small agencies we spoke with. What’s more, many agencies model their ATO packages on FedRAMP’s in order to be seen as more legitimate and trustworthy.
C. What’s counterintuitive
What starts out as good advice becomes mandatory. NIST publishes compilations of what is basically good advice and then asks agencies to determine the degree to which they need to heed that advice. But rather than heeding a subset of advice (“tailoring”), agencies generally take a maximalist approach out of fear of reprisal from the Office of Management and Budget, their Offices of Inspector General, and Congress.
Does a website need an ATO? Time and again, we heard that what constitutes an information system was “intentionally vague.” Fair enough. But what really caught us off guard was when someone said “Well, you shouldn’t be ATO’ing a website, anyways.” 18F primarily builds websites and web applications, and worked with the FEC over the last few months to assess the new FEC.gov.
Though required, the value of compliance documentation is hotly contested. Compliance documentation could ostensibly serve as a useful starting point for agency staff looking to better understand their organization’s approach to risk management — but no one we talked to said this. One executive described compliance documentation as “worthless” for determining what was actually going on on the ground. (Related: See the opportunity “Research the onboarding experience of security professionals.”)
Agencies approach risk management reductively, but suffer from problems that are holistic. People pay taxes and give up certain freedoms so that the government can account for holistic goods — things like labor conditions and air quality. These are shared goods for which a reductive/market approach doesn’t make sense.
As agencies increasingly provide and make use of shared services, a reductive approach to risk management may not make the most sense; shared services increasingly implicate the entire federal government. Yet agencies are not really encouraged to conduct shared risk assessments, and no agency “pays taxes” or “gives up freedoms” to make holistic programs (like FedRAMP) work.
Roles are over-specified and under-achievable. We heard the RMF “assumes you have 14 different roles at your agency,” but (again) most small agencies we talked to only have a staff of 1-3 people. Many of our interviews centered around the fact that, just like system boundaries, roles were poorly defined. Despite advice to the contrary, people are often forced to wear multiple hats.
D. Pain points
A culture of fear. Many interviewees described adhering to processes and observing norms not out of benefit to their agency but out of fear of reprisal from their IGs, OMB, and Congress. Indeed, “risk management” typically refers to managing risks from external threats, but our interviews suggest that a large portion of the risk agencies manage comes from “threats” within the federal government. Many interviewees described forgoing potentially innovative approaches to compliance out of fear of being reprimanded.
Agencies are reluctant to share their compliance packages. Related to a culture of fear is a culture of secrecy. Agencies looking to expedite their compliance processes will often look to other agencies to see if they can leverage prior work (“who’s done this before?”). But we heard this is difficult to do because agencies disagree as to the privacy of their compliance packages. (Agencies also disagree as to what constitutes a sufficient risk assessment.) In the rare cases where agencies were willing to share their compliance packages, they did so off the record, in person. (We heard “Everyone thinks that they’re unique. They don’t want to tell you what they’re doing even though everyone basically already knows what they’re doing.”)
ATOs are occasionally granted under duress. Many agencies employ legacy systems for which authorizing officials didn’t have a choice as to whether or not they “wanted” to accept risk. We also heard stories where risk was accepted to help secure a political win or to meet a production deadline. In one interview, we heard a story where an office wanted to quickly authorize cloud-based software in order to process non-sensitive data. The vendor said they counted a dozen or more federal agencies as customers, but the office couldn’t find a single one who had either conducted an assessment or was willing to share their assessment.
Continuous monitoring portends a growing compliance burden. Many executives worry that their compliance burden will only increase over time as a consequence of continuous monitoring. We also heard that vendors working with multiple agencies will need to respond to monthly compliance-related inquiries for each federal customer. As their customer base grows, so does their compliance burden.
The role of third-party auditors. One executive lamented that the incentives simply don’t align for the hiring of third-party auditors. In this person’s view, auditors are perpetual newcomers to the agency who must understand the agency’s risk posture, policies, systems, etc. before they can add any value. Further, in this person’s view, auditors are encouraged to find problems (to stay on contract longer) instead of actually assessing the accuracy of compliance documentation.
CIO Council meetings are potentially not as effective as they could be. As mentioned above, many aspects of the IT compliance space suffer from a sense of pending culpability. One interviewee noted that OMB, the office charged with holding agency CIOs accountable, attends CIO Council meetings. It follows that CIO Council meetings might not be seen as safe spaces to air dirty laundry.
People struggle to understand FedRAMP’s value proposition. While it’s clear that FedRAMP is valued within government — agencies increasingly list the program as part of their evaluation criteria for cloud-service providers, and many agencies use the program’s templates to add legitimacy to their SSPs — FedRAMP is also seen as struggling. Both agencies and vendors wish that FedRAMP provided more case studies and best practices; and, as far as we can tell, the level of effort and cost of the program increase linearly with the number of systems in its portfolio.
Vendor time-to-market. Although we spoke to a very limited pool of vendors, the ones we did talk to struggled to understand the scope of the compliance burden they faced in seeking to do business with government. How is this different from financial compliance, or HIPAA compliance? How long will this take? How much will this cost?
E. Opportunities

A focus on people. The culture of federal IT compliance is largely defined by transactional norms (e.g., FISMA reporting) and rules of deference (e.g., to OMB) rather than reciprocity and here-and-now humility. While everyone works in federal IT compliance of their own volition, no one points to a “standard” career path — everyone has a different background and a unique perspective.
We see in this complex milieu a huge opportunity to empower people by way of shared knowledge and experiences. This might involve celebrating the stories people use to make sense of federal IT compliance, or it might involve better identifying goals for shared task performance. To borrow an example from professor Edgar Schein, members of a team running a relay race think about task performance much differently than members of a surgical team working within an operating room.
Onboarding. This is directly related to the first opportunity. Many of our interviews left us wondering: What’s it like to get up and running as a government security professional? How do security professionals make sense of their agency’s risk posture? Many of the people we spoke with described learning on the job rather than referencing any kind of agency threat model, map, or training course.
In the same vein as an earlier 18F project, we see an opportunity to conduct stakeholder interviews and contextual inquiries to understand the onboarding experience(s) of government security professionals. Findings from that research could inform, for example, the creation of a CISO Handbook (perhaps in a format similar to the 18F Handbook).
Civil servants expressed a need for:
Coherent guidance. Many interviewees asked for a map, or a single starting point, to make sense of risk management — as it applies both to a single agency and to the government as a whole.
A set of materials to help transition from DIACAP to RMF. What are the critical points to think through? How do older processes map onto newer ones?
Examples of the RMF in action. Many interviewees asked if we could help them identify de facto examples of completed system security plans and/or policies agencies need to have in place.
Case studies of innovative approaches. One agency we spoke to saw big gains from instituting a peer-review process for compliance packages rather than a top-down review process. Another agency employs a “CIO governance” team to help manage and triage their workflow. We see an opportunity to use case studies to celebrate and memorialize innovative approaches.
More opportunities to receive direct feedback. Perhaps NIST or OMB could offer office hours or provide a way for agency CIOs to submit anonymous questions.
Vendors expressed a need for:
Materials for understanding federal compliance from the outside. Common questions we heard from vendors were: What are our options (FedRAMP PATO, Agency ATO, etc.)? How does this compare with other compliance regimes like HIPAA? How long will this take? What’s expected of us once we have a federal customer?
Define the privacy considerations for compliance documentation. One white paper we read argues that when groups collectively define privacy, they assess it more critically and feel more comfortable engaging with the processes by which privacy is defined (and this article argues “laying out the transmission principles for given situations may encourage people, both as individuals and collectively, to share more and attain greater good.”). For our purposes: What if agencies had simple heuristics by which they could determine when it was okay to share compliance and security documentation? A shared sense of social norms might encourage agencies to speak more freely about their security and compliance work. When is it okay for an agency to share its SSPs with another agency? When is it okay for an agency to share its SSPs with FedRAMP?
Ratification for authorization. In order to promote ATO reuse, agencies could jointly devise a way to assess systems so that their authorization is broadly agreed upon. The Joint Authorization Board (JAB) may be a useful model for this. Relatedly, if agencies can broadly agree on an “ATO” process, then there’s also an opportunity to estimate how long it might take for any given agency to implement and assess controls for a given SaaS product. (This feels tantamount to automating the POA&M.)
Metrics. How many ATOs are completed in a given year? How well are agencies doing them? The answer (again) depends on whom you ask. Defining metrics around the ATO process could help agencies find internal bottlenecks and compare themselves with one another. (Related: GSA’s Lightweight ATO process and NGA’s ATO-in-a-day process may be worth introducing at other agencies, but only if the metrics and processes by which they operate are seen as valuable to other agencies.)
Infrastructure as code, and vice versa.
Machine-readable standards for control documentation. OpenControl has already seen significant adoption, and could prove really valuable with the help of an SSP-building tool like Compliance Masonry. What if we built a tool that takes in SSPs as Word documents and outputs OpenControl?
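As a thought experiment, such a converter might look like the sketch below. It assumes the SSP narrative has already been extracted to plain text (a real tool would need a Word-document parser), and the heading pattern, component name, and output shape are simplified stand-ins for OpenControl’s actual YAML component schema.

```python
import json
import re

# Headings like "AC-2:" are assumed to introduce each control narrative.
CONTROL_HEADING = re.compile(r"^([A-Z]{2}-\d+)\s*:\s*(.*)$")

def ssp_text_to_opencontrol(text, component_name, standard_key="NIST-800-53"):
    """Group an SSP narrative into an OpenControl-style component dict."""
    satisfies = []
    current = None
    for line in text.splitlines():
        match = CONTROL_HEADING.match(line.strip())
        if match:
            current = {
                "control_key": match.group(1),
                "standard_key": standard_key,
                "narrative": match.group(2).strip(),
            }
            satisfies.append(current)
        elif current and line.strip():
            # Continuation lines belong to the current control's narrative.
            current["narrative"] += " " + line.strip()
    # OpenControl components are YAML on disk; JSON is shown here only
    # to keep the sketch dependency-free.
    return {"name": component_name, "satisfies": satisfies}

sample = """\
AC-2: Accounts are provisioned through the agency identity service.
Access reviews occur quarterly.
AU-2: Application and infrastructure logs are centrally collected.
"""

component = ssp_text_to_opencontrol(sample, "Example App")
print(json.dumps(component, indent=2))
```

The hard part a sketch like this elides is exactly what interviewees complained about: the narratives live in free-form Word prose, not in predictably structured headings.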
OSCAL, OpenSCAP, etc. Our discovery surfaced (to us) a number of current and future metadata standards that might be helpful for security professionals. We see a big opportunity here as well.
Feeds. Depending on the privacy considerations of compliance documentation (see opportunity #4), it’s not difficult to imagine a future where agencies might broadcast continuously updated feeds of compliance-related information at predictable URLs. Imagine accessing system metadata at systems.agency-name.com/fisma-id, or policy data at policies.agency-name.com/policy-id, in both human-readable and machine-readable formats. A related thought experiment: What if policies read like integration tests?
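That thought experiment could be sketched like this. Everything here is hypothetical: the JSON shape of the policy feed is invented, and the “feed” is a local fixture rather than a live fetch from a policies.agency-name.com URL.

```python
import json

# A hypothetical machine-readable policy feed (invented shape).
policy_feed = json.loads("""{
  "policy_id": "pol-accounts-001",
  "password_min_length": 16,
  "mfa_required": true,
  "access_review_interval_days": 90
}""")

# Each check reads like an integration test against the policy data:
# a named, machine-checkable assertion the policy makes about itself.
checks = {
    "passwords are at least 12 characters":
        lambda p: p["password_min_length"] >= 12,
    "multi-factor authentication is required":
        lambda p: p["mfa_required"] is True,
    "access reviews happen at least quarterly":
        lambda p: p["access_review_interval_days"] <= 92,
}

results = {name: check(policy_feed) for name, check in checks.items()}
for name, passed in results.items():
    print(("PASS" if passed else "FAIL"), "-", name)
```

A continuous-monitoring pipeline could then rerun these checks whenever the feed updates, turning policy drift into a failing test rather than an audit finding.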
Visualize system boundaries. Many interviewees asked for a tool to visualize how applications work together and where system boundaries begin and end.
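One lightweight way to sketch such a tool: generate Graphviz DOT from a system inventory, rendering each boundary as a cluster. The inventory and connections below are invented for illustration.

```python
# Toy inventory: boundary name -> applications inside that boundary.
inventory = {
    "GSS": ["identity-service", "logging", "email"],
    "Public Website": ["cms", "search"],
}
# Cross-boundary connections between applications.
connections = [("cms", "identity-service"), ("search", "logging")]

def to_dot(boundaries, edges):
    """Render each boundary as a DOT cluster; applications as nodes."""
    lines = ["digraph systems {"]
    for i, (name, apps) in enumerate(boundaries.items()):
        lines.append(f'  subgraph cluster_{i} {{ label="{name}";')
        for app in apps:
            lines.append(f'    "{app}";')
        lines.append("  }")
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(inventory, connections)
print(dot)  # pipe to `dot -Tsvg` to render a diagram
```

Even this toy version makes the interesting question visible: edges that cross cluster borders are exactly the places where one system’s risk becomes another’s.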
Controls. Many interviewees like the idea of ATO reuse but disagree as to the processes by which ATOs are granted. Instead of focusing on the entire ATO process, we see an opportunity to promote control reuse.
Selection. Many executives asked for a tool to more “scientifically” determine which controls are applicable to an information system.
Strategies. In the absence of selection criteria, some interviewees asked for strategies for working through a set of controls. For example, one person we talked to wondered if there were a way to represent controls hierarchically (based on whether they were likely to be inheritable, easy to implement, etc.) such that some controls could be seen as more foundational, others more ancillary. Another suggested a phased approach to meeting baselines: implementing a subset of controls first (perhaps even during development), reflecting on what was learned, and using that knowledge to inform how the rest of the controls are met before launch.
Implementation guidance. Interviewees expressed a desire to implement controls in ways that are observable, measurable, and repeatable.
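The hierarchical, phased strategy described under “Strategies” could be sketched as a simple scoring pass. The control names, inheritability flags, and effort scores below are invented for illustration.

```python
# (control, likely_inheritable, implementation_effort: 1=easy .. 5=hard)
controls = [
    ("PE-3 Physical Access",   True,  1),  # e.g., inherited from the data center
    ("AC-2 Account Mgmt",      False, 2),
    ("AU-2 Audit Events",      True,  2),
    ("CP-9 System Backup",     False, 4),
    ("IR-4 Incident Handling", False, 5),
]

def phase(controls, effort_cutoff=3):
    """Foundational phase: inheritable or low-effort controls; the rest wait."""
    foundational, ancillary = [], []
    for name, inheritable, effort in controls:
        if inheritable or effort <= effort_cutoff:
            foundational.append(name)
        else:
            ancillary.append(name)
    # Within each phase, tackle inheritable controls first, then the easy ones.
    order = {name: (not inh, eff) for name, inh, eff in controls}
    foundational.sort(key=order.get)
    ancillary.sort(key=order.get)
    return foundational, ancillary

first, later = phase(controls)
print("Phase 1 (during development):", first)
print("Phase 2 (before launch):", later)
```

The point isn’t the particular cutoff; it’s that once inheritability and effort are recorded as data, “which controls are foundational?” becomes a query rather than a judgment call repeated per system.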
Contracts. Many agencies bake compliance requirements directly into their contracts for information systems. We see an opportunity to conduct lexical analysis on this data to help the government determine how agencies are commonly describing compliance requirements. Cross-referencing these descriptions with systems authorized for operation could provide a sense of the relative success of specifying compliance requirements upfront.
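A minimal sketch of that lexical analysis, using invented contract clauses and a hand-picked phrase list (a real analysis might derive the phrases from n-gram counts over a contract corpus):

```python
import re
from collections import Counter

# Invented stand-ins for scraped contract language.
clauses = [
    "The contractor shall comply with FISMA and NIST SP 800-53.",
    "The contractor shall obtain an Authority to Operate (ATO).",
    "All systems shall comply with NIST SP 800-53 moderate baseline.",
    "Vendor must maintain FedRAMP authorization for cloud services.",
]

# Phrases of interest (hand-picked for this sketch).
phrases = ["shall comply", "nist sp 800-53", "authority to operate",
           "fedramp", "fisma"]

counts = Counter()
for clause in clauses:
    lowered = clause.lower()
    for phrase in phrases:
        counts[phrase] += len(re.findall(re.escape(phrase), lowered))

for phrase, n in counts.most_common():
    print(f"{n:2d}  {phrase}")
```

Cross-referencing these counts against which systems were ultimately authorized is the step that would turn word frequencies into evidence about whether upfront compliance language helps.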
FISMA reporting data may provide useful source data for AI tools to automatically determine which controls apply to a system.
An anonymous retro for the CIO Council. We see an opportunity to run an anonymous retrospective with the CIO Council to allow members to safely share what’s working well, what’s just okay, and what should change. We also heard interest in running an “ATO anonymous” event.
Clarify FedRAMP’s value proposition to a heterogeneous set of users.
A few interviewees (both within and outside government) said they struggled to make sense of FedRAMP’s content. This sounds like an opportunity to rethink the program’s content/outreach strategy (see earlier recommendations for more coherent guidance).
FedRAMP is increasingly mentioned in procurement evaluation criteria, but agencies aren’t able to reuse “FedRAMP ready” software without some scrutiny. How might we help agencies determine the level of effort required to conduct their own evaluations?
Some agencies (and vendors) find the concept of continuous monitoring daunting. So long as agencies and vendors can agree on the details, this sounds like a big opportunity for FedRAMP to help convene centralized continuous monitoring across cloud service providers.
F. Notable quotes
“In many cases what I hear when an organization says something like ‘let’s adopt a cloud technology,’ the first question from the CIO is ‘who’s doing this?’ Once they understand there’s only one agency doing this, their next question is ‘How much do I trust that that agency is doing their due diligence with regard to their ATOs? Does their CIO understand the level of risk they’re accepting?’ Honestly, and I’m not trying to talk bad about anybody, some CIOs are just cut from a different cloth.”
“[As to your question on institutional knowledge,] the corporate culture of an agency will be baked into a number of documents, but ATO paperwork isn’t one of them. An ATO is basically a long list of controls, mitigations to those controls, and artifacts showing we’re actually doing things. The institutional knowledge and corporate culture will be baked into other documents, largely policy documents. It’s going to be the policy of an agency that says ‘we will or we will not accept risk from…’ It’s also in your strategic plans and other high-level stuff. Maybe… turnover binders? I don’t think there’s actually a single document or a group of documents aside from policy that’s going to help a CISO understand what it’ll take to do their job.”
“There’s a key cultural difference between developers (who often embrace amateurs) and security people (who are often expert first responders) that’s not mentioned in the RMF.”
“So the first issue [with the RMF is that] time is not a first-class citizen. The second issue is that it doesn’t identify the amount of resources you have available as a constraint; [the RMF] literally assumes you have 14 different roles. The third issue is that it is explicitly defined in the RMF that it’s only meant as guidance for organizations to define their own risk-management operation — but I’ve yet to meet an agency outside of [agency] that has translated this into their own operational compliance program.”
“That’s something missing from the NIST framework — and I don’t know how you could re-design it — but scope is not in there. Like, how many users are accessing the system? What’s the actual amount of data in it? How often is it accessed? [...] And if there’s one piece of moderate data in there, then the whole system is moderate. [They don’t really allow you to say ‘if we limit access, then it dramatically reduces the threat. So we’re just going to focus on these access controls and not backup or recovery because there’s 10 people and we can use some other service for that.’] There’s no leveraging economies of scale.”
“I have three ATOs, all at the moderate level. Well, actually, I have one ATO with three security boundaries, all at the moderate level. Those security boundaries actually contain systems that have lower security requirements. For example, I have [system name], but not everything in [that system’s] security boundary is controlled at that level. Certainly not everything needs that level of security. I have one ATO letter, though. I try and avoid paperwork.”
“We spend a lot of time figuring out what’s defensible. Like I said, we’ve got really smart IT compliance people here [...] some of the best people I’ve worked with in the whole industry. But instead of being able to just do what we think is best, a lot of times we’re like ‘Yeah, I know this is what’s best. But how are we going to defend this from outside auditors?’”
“[Does an ATO case study exist?] That’s a good question, but no. I think if I had to write something publicly I think I would tell the story of a more straightforward process than what actually exists [...] The last time I wrote things down, I reorganized how we worked.”
“FedRAMP does a really good job of defining the packages and selecting the controls: what constitutes low, moderate, and high [baselines] [...] Even if I questioned FedRAMP’s approach, I would suggest folks use them so the government speaks in one voice rather than thousands of unique voices. Before FedRAMP, all the packages were different. Since then, all *cloud* packages look the same. If I had one piece of advice to you, it’d be to extend that to *all* systems.”
“[18F is an example of an office that’s constantly asking to authorize apps.] So being able to normalize these processes is important. If a PM wants to try out new software, there should be a straightforward process. There currently isn’t, which makes it hard for us to say ‘we’re going to streamline this and make it easier for everyone!’”
“My suggestion would be to cross-walk the journey from submitting the package to approving the package to allowing for continuous authorization. Basically, how do we make it collaborative so I don’t get to the end to find out that I haven’t done it correctly?”
“The only sustainable approach for small agencies is to leverage Software as a Service from Cloud Service Providers.”
“[We didn’t know how to approve limited-use, cloud-based software as a service on a quick turnaround, so we basically did a similar assessment to what we’d do for one of our internal tools: We filled out an abbreviated SSP and ensured that there was no data stored that shouldn’t be in there. Then we checked the national vulnerability database and called it good… but this is going to be a big challenge going forward.]”
“With security, it’s hard to be proactive. It’s like whack-a-mole. Focusing on documentation can cause you to be reactive.”
[Compliance is scattered.] “We rely on NIST to create the standards, OMB to create actionable policies, and DHS to enable the operation of those standards.”