Getting a Pulse on the Success of Your Security Program

By Ningjing Gao, Senior Group Program Manager, Security PMO


The Adobe Security Program Management Office (PMO) oversees a diverse portfolio of over 20 strategic technical programs annually, each meticulously defined with its own distinct scope, clear deliverables, and completion deadlines. Security initiatives are complex and large in scale, often involving many interrelated tasks and dependencies that make them susceptible to delays and unforeseen changes, so technical program managers (TPMs) must closely monitor each program’s health and success throughout the program lifecycle.

In this blog, I will share how the Adobe Security PMO team measures the effectiveness of its security programs throughout the various program phases — program initiation, planning, execution, and closing — and provide insights on how you can effectively evaluate your own programs each step of the way to amplify their impact within your organization.

Program Initiation: Defining Measurements of Success

Defining what success looks like starts at the very beginning of program initiation. At Adobe, the TPM assigned to a specific program is responsible for determining its initial scope, which includes a distinct set of deliverables, a timeline with milestones, and a clear measurement of success. That measurement typically consists of high-level program KPIs, later refined during the program planning phase, that gauge program performance. Both the program sponsors and key stakeholders provide input into these KPIs.

Using our past endpoint detection and response (EDR) deployment program as an example, the assigned TPM defined the program’s measurement of success as reaching a 99% EDR agent deployment rate and enabling a proactive protection policy by a specific date, based on input from the program’s executive sponsors.

In addition to the measurements of success, the TPM should also determine the program minimum viable product (MVP), or the absolute minimum delivery required from the program. The MVP is a lower bar than the measurement of success, but it is the baseline that the team absolutely cannot miss. Establishing an MVP is important as a fallback in case the measurement of success cannot be reached due to unforeseen circumstances, such as technical or other difficulties.
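
It can help to record both bars side by side so the fallback is never an afterthought. Here is a minimal sketch in Python (the class, the 95% MVP floor, and the deadline are illustrative stand-ins, not Adobe tooling) using the EDR deployment target above:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SuccessCriteria:
        """Pairs a program's measurement of success with its MVP floor."""
        name: str
        target: float      # the measurement of success
        mvp_floor: float   # the absolute minimum the team cannot miss
        deadline: date

        def status(self, actual: float) -> str:
            if actual >= self.target:
                return "success"
            if actual >= self.mvp_floor:
                return "MVP met, target missed"
            return "below MVP"

    # The 99% target comes from the EDR example above; the 95% MVP floor
    # and the deadline are hypothetical stand-ins for sponsor-set values.
    edr_coverage = SuccessCriteria(
        name="EDR agent deployment rate",
        target=0.99,
        mvp_floor=0.95,
        deadline=date(2024, 5, 31),
    )
    print(edr_coverage.status(0.97))  # -> MVP met, target missed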

Program Planning: Setting SMART KPIs

When a program moves into the detailed planning phase, the PMO defines a set of “SMART” KPIs to measure program performance precisely.

SMART goals cover the following questions:

  • Specific: What exactly does the program want to achieve?
  • Measurable: How will you know whether the program has achieved its goal?
  • Attainable: Is the program goal realistically achievable?
  • Relevant: Does it align with where the program and stakeholders want to be?
  • Time-Bound: What are the key milestones and deadlines that need to be met?

Going back to our EDR program as the example, one of the SMART goals our TPM established was to “reduce the number of hosts with old EDR agent versions to below 1% by the end of May 2024.” Another SMART goal for the program was to “reduce the number of hosts with missing or incorrect EDR tags down to 1% by the end of February 2024.” These goals are SMART because they are specific about what the program wants to achieve, measurable through KPI numbers, realistically attainable for our teams, relevant to our program’s stakeholders, and time-bound by a clear deadline.
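
A useful litmus test is whether a goal can be reduced to a metric, a threshold, and a deadline that a tracker can evaluate automatically. A rough sketch of the two EDR goals in that form (the metric names and status logic are illustrative; the percentages and dates come from the goals themselves):

    from datetime import date

    smart_goals = [
        {"metric": "hosts_with_old_edr_agent_pct", "threshold": 1.0,
         "deadline": date(2024, 5, 31)},
        {"metric": "hosts_with_bad_edr_tags_pct", "threshold": 1.0,
         "deadline": date(2024, 2, 29)},
    ]

    def goal_status(goal: dict, current_pct: float, today: date) -> str:
        """Met once the metric is below threshold; missed once past deadline."""
        if current_pct < goal["threshold"]:
            return "met"
        return "missed" if today > goal["deadline"] else "at risk"

    for goal in smart_goals:
        print(goal["metric"],
              goal_status(goal, current_pct=2.5, today=date(2024, 3, 15)))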

Program Execution: Tracking and Reporting KPIs

At Adobe, we’ve developed automated dashboards to support KPI tracking efforts during the program execution phase. Automated dashboards help reduce the manual effort of data collection and provide timelier updates. For example, one dashboard can show the monthly count of created tickets versus resolved vulnerability finding tickets, which tells us both accurately and in real time whether a given remediation is effective. In any given month, we should be able to look at the dashboard and see whether the remediation speed is keeping up with the rate of newly created tickets. Since the ticket and vulnerability counts change hourly, automated dashboards can be a lifesaver for capturing accurate data.
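
The computation behind that dashboard view is simple. As a rough sketch, with made-up in-memory records standing in for the ticketing system’s data, the created-versus-resolved comparison comes down to two monthly counts:

    from collections import Counter

    # Made-up records; a real dashboard would query the ticketing system
    # rather than an in-memory list.
    tickets = [
        {"created": "2024-01", "resolved": "2024-02"},
        {"created": "2024-01", "resolved": None},        # still open
        {"created": "2024-02", "resolved": "2024-02"},
        {"created": "2024-02", "resolved": "2024-03"},
    ]

    created = Counter(t["created"] for t in tickets)
    resolved = Counter(t["resolved"] for t in tickets if t["resolved"])

    # Remediation keeps pace in months where resolutions >= new tickets.
    for month in sorted(created):
        c, r = created[month], resolved.get(month, 0)
        verdict = "keeping pace" if r >= c else "falling behind"
        print(f"{month}: created={c}, resolved={r} -> {verdict}")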

The TPM then aggregates the program KPIs and dashboard outputs and shares them through regular program status reports that are sent to the program stakeholders for visibility and transparency. With these dashboards and status reports, program stakeholders can course correct more rapidly and make more impactful data-driven decisions.

Program Closing: Soliciting Stakeholder Feedback & Sign-Off

Finally, the stakeholders officially sign off on a program during the closing phase, validating its ultimate success. At Adobe, we developed a clear program sign-off process to evaluate the program and get final feedback:

  • Step 1: The TPM identifies and lists all sign-off parties and their areas of responsibility
  • Step 2: The TPM creates a central sign-off document and informs all stakeholders of the sign-off deadlines
  • Step 3: Each stakeholder enters a decision by the due date; if they are unable to provide sign-off, they must give a specific reason and the action required for them to sign off
  • Step 4: The TPM coordinates closure of the required actions, then requests sign-off again
  • Step 5: The TPM provides a final sign-off summary to all involved parties and program stakeholders as part of program closure

The sign-off process eliminates any ambiguity about the program’s impact and results, and it surfaces anything more that could be done or improved to reach the program’s goals.
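
A central sign-off document can be as simple as one row per stakeholder. Here is a minimal sketch (the stakeholder names, areas, and field names are hypothetical) of how each row might carry the decision plus the reason and required action when sign-off is withheld:

    from dataclasses import dataclass

    @dataclass
    class SignOff:
        """One stakeholder's row in the central sign-off document."""
        stakeholder: str
        area: str
        approved: bool = False
        blocking_reason: str = ""   # required whenever sign-off is withheld
        required_action: str = ""

    # Hypothetical sign-off parties for an endpoint-focused program.
    signoffs = [
        SignOff("Security Engineering", "EDR deployment coverage"),
        SignOff("IT Operations", "Endpoint configuration"),
    ]

    # Step 3: one party approves, the other withholds sign-off and records
    # the reason and required action (which Step 4 then closes out).
    signoffs[0].approved = True
    signoffs[1].blocking_reason = "Tag cleanup incomplete on 2% of hosts"
    signoffs[1].required_action = "Finish tag remediation, then re-request"

    open_items = [s for s in signoffs if not s.approved]
    print(f"{len(open_items)} sign-off(s) still open")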

Measuring Individual TPM Success

Measuring a program’s success is important, but measuring how well the assigned TPM delivers on the program is equally so. We want to know whether the individual TPM is successful in the eyes of the stakeholders, as this is a key factor to the program’s overall success.

To evaluate our TPMs, we send out regular program stakeholder surveys throughout the program cycle to get stakeholders’ assessments of the program management in terms of scope, timeline, deliverables, risk, communication, and budget management. The survey includes some questions with predefined scales as well as open-ended questions asking for suggestions for improvement.

The predefined-scale questions ask stakeholders to rate each of these areas on a fixed rating scale, which keeps responses easy to score and compare across programs and reporting periods.
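
Turning those scaled answers into a pulse check can be as simple as averaging each area and flagging anything below a chosen bar. A quick sketch, with a hypothetical 1-to-5 scale and made-up scores:

    from statistics import mean

    # Hypothetical responses on a 1-5 scale (5 = excellent) for the areas
    # listed above; both the scale and the scores are illustrative.
    responses = {
        "scope":         [5, 4, 4],
        "timeline":      [3, 4, 3],
        "deliverables":  [5, 5, 4],
        "risk":          [4, 3, 4],
        "communication": [5, 5, 5],
        "budget":        [4, 4, 3],
    }

    for area, scores in responses.items():
        average = mean(scores)
        flag = "  <- follow up" if average < 4 else ""
        print(f"{area:<14} avg={average:.1f}{flag}")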

Final Takeaways

To wrap up, here are five key considerations to help you measure your security program’s success:

  1. In the program initiation phase, clearly define your measurement of success and the MVP.
  2. During the program’s detailed planning phase, define a set of SMART KPIs that should be tracked.
  3. During the program execution phase, leverage automated dashboards to report on the progress of your KPIs, and embed them as a part of your regular program status communications to foster transparency.
  4. At the program closing phase, gather feedback from stakeholders and obtain final sign-offs to reduce any ambiguity.
  5. Survey your program stakeholders regularly to get a pulse check on the program’s progress.

Measuring security program success requires a delicate balance of art and science. By integrating these five key considerations and lessons learned, you can be more confident in enhancing your programs’ effectiveness year after year.

What’s on Your Mind? We Want to Hear from You!

Your opinion matters to us. Help shape the future of our blog by sharing your ideas and preferences. Click the link below to take a quick survey and tell us what you’d like to read about next.

> Take the Security@Adobe Tech Blog Survey
