Bonus Content
Tooling Up for Incident Response
When a security incident hits, every second counts. How do you know your organization is ready to respond effectively? This panel brings together experts to explore the three pillars of incident response: people, processes, and tools.
About This Session
The panel features three industry leaders with unique perspectives on cybersecurity, IT leadership, and hands-on technical execution. Dr. Jessica Barker brings a globally recognized, people-centered approach to cybersecurity; Ricky Mayer contributes 25+ years of IT executive experience leading professional services teams through complex challenges; and Zak offers unparalleled technical depth, forged through years of real-world problem-solving.
A core message is that incident response is not just a technology challenge but a coordination challenge across humans, processes, and tools. The session encourages attendees to rethink traditional assumptions and adopt a holistic mindset that strengthens organizational resilience before, during, and after cyber incidents.
Panelists emphasize readiness for the unexpected—highlighting scenarios and blind spots organizations often overlook. From human behavior under stress to infrastructure realities during crisis, the group outlines practical strategies for elevating preparedness across teams and environments.
Key Takeaways
- Effective incident response requires a unified approach across people, processes, and technology.
- Panel experts offer diverse perspectives, from human psychology (Dr. Barker) to IT leadership (Ricky Mayer) to technical execution (Zak).
- Human readiness is critical, as real incidents often expose overlooked behaviors, blind spots, and stress reactions.
- Rethinking traditional response playbooks can reveal gaps, especially as threats evolve and teams face new challenges.
- Holistic preparation builds stronger organizational resilience, enabling faster, more coordinated incident response.
- The session provides actionable insights for modern cybersecurity teams seeking to improve readiness before a real attack occurs.
Incident Response & Recovery
Use insights from threat detection and other third-party tools to respond to and recover from security incidents more effectively.
Commvault Cyber Resilience Services
A comprehensive suite of solutions helping customers both tactically and strategically with their cyber resilience needs.
Shared Responsibility Model
Learn about the roles and obligations of Commvault (as the provider of DMaaS capabilities) and its customers.
Frequently Asked Questions
Why is the human element so important in incident response?
Human behavior often shapes the outcome of an incident more than tools alone. Understanding how people react under pressure and preparing teams for unexpected scenarios can significantly reduce risk.
Why do most people underestimate cyber risk, and how can organizations reframe messaging to drive meaningful action?
Many teams experience optimism bias—with over 80% underestimating the likelihood or impact of a cyberattack. This causes delayed decision-making and poor preparation. Organizations can counter this by framing cyber risk in clear, relatable business terms rather than fear-based messaging. Connecting risk to mission-critical operations, customer impact, and financial outcomes helps employees understand why action is necessary and motivates proactive behavior across the organization.
How does organizational culture influence incident response and overall resilience?
Culture is one of the biggest multipliers of resilience. Trust, open communication, and psychological safety enable teams to respond faster and more effectively during high-pressure incidents. When people feel safe raising concerns, sharing mistakes, and asking questions, IR teams avoid blind spots and minimize confusion. A healthy culture also supports cross-functional collaboration between security, infrastructure, backup, and application teams—reducing silos and improving response speed.
Why does rebuilding from a known-good state matter more than traditional “recovery” after a cyberattack?
Traditional recovery models assume systems are intact after an outage, but cyberattacks often corrupt configurations, credentials, and backups. Simply restoring systems risks reinfection and extended downtime. Rebuilding from a known-good baseline—combined with Cleanroom validation—ensures malicious artifacts are removed and identity, data, and applications are restored securely. This shift from recover to rebuild is central to modern IR strategies.
Why is hands-on practice essential for improving IR and resilience outcomes?
Tabletop exercises, rebuild drills, red and purple team engagements, and full-scale simulations help teams uncover hidden gaps and reduce response time. Practice builds muscle memory, improves cross-team communication, and exposes misconfigurations or process failures before an actual attack occurs. Organizations that train regularly—under realistic, chaotic conditions—recover significantly faster and with fewer errors.
Transcript
00:08 – 00:10
Welcome everybody, thank you again.
00:10 – 00:11
This is a great turnout.
00:11 – 00:14
First breakout session, post general session.
00:14 – 00:19
You are here, in the right place, for Tooling Up for Incident Response.
00:19 – 00:24
We’re going to tackle people, process, tools, slash technology.
00:24 – 00:26
And I’ll be your host, Chris Mierzwa.
00:26 – 00:31
I’m a Senior Director here at Commvault, responsible for our global resilience programs.
00:31 – 00:41
And of course, I'm a little biased because it's our panel here, but this is an unprecedented six-hour, no-restroom-break session.
00:41 – 00:46
And I got to tell you, we said should we, should we not.
00:46 – 00:49
Maybe it’s against the rules, but we’ll try it.
00:49 – 00:49
No, just kidding.
00:49 – 00:53
One hour on your calendars, but it’s one of the few.
00:53 – 00:53
So thank you.
00:53 – 00:58
I know it’s a big commitment to be here for the 60 minutes, but I promise it’s going to be worth it.
01:00 – 01:08
We’ve assembled three incredible people, and I’m going to do a little intro for each of them because I can do it less humbly than they can.
01:08 – 01:14
But I want to make sure for the next hour you know the level of talent that’s here.
01:14 – 01:16
And I hope you feel the same as us.
01:16 – 01:27
We’re trying to tackle and interweave sort of three unique areas that at least I can think of that maybe haven’t been tackled when it comes to how should we think about and be ready
01:27 – 01:30
for incident response in new and unique ways.
01:30 – 01:32
And so that’s what we’re hoping to tackle.
01:33 – 01:36
Without further ado, let me start first to my left.
01:36 – 01:36
Dr.
01:36 – 01:38
Jessica Barker.
01:38 – 01:40
I’ve gotten a little chance to know you over the last couple of months.
01:40 – 01:44
This is unique.
01:44 – 01:51
She comes to us both with a deep cybersecurity background and looks at it from a human angle.
01:51 – 01:59
So while I hope we all have unique things, I can tell you you've probably never heard what she's done before, but a little more intro.
01:59 – 02:03
Two books, global circuit discussions.
02:03 – 02:15
She’s on the plane all the time, all around the world, talking to people, talking to companies like yours about how to prepare people for the things that you’re not usually
02:15 – 02:15
talking about.
02:15 – 02:18
How do they actually handle these things, right?
02:18 – 02:20
And we’re going to go deep and dive into that.
02:20 – 02:21
Thank you for being here.
02:21 – 02:22
Yeah, absolutely.
02:22 – 02:23
All right.
02:23 – 02:24
One more over.
02:24 – 02:26
Let’s get to Ricky Mayer.
02:26 – 02:38
Ricky, while you just joined Commvault, welcome aboard, right, a few months, he comes with a 25-year-plus background in IT as a senior executive, and he is now our Vice President of
02:38 – 02:42
Global Professional Services for Commvault.
02:42 – 02:43
But…
02:43 – 02:54
long career at IBM, Kyndryl, VMware, and you know, when you have people doing professional services, you're not tested when things go well.
02:54 – 02:56
You’re tested when there’s challenges.
02:56 – 03:02
And clearly you’ve been through a lot of those, and he’s gonna bring a lot of that magic to us today, so thank you for being here.
03:02 – 03:04
All right, the big guy.
03:04 – 03:05
By the way,
03:05 – 03:11
originally from the UK, he now hails from the US, Dallas-Fort Worth, right?
03:11 – 03:15
Zak came all the way from a little town called Cape Town, right?
03:15 – 03:18
Small flight, so.
03:18 – 03:19
A little village.
03:19 – 03:20
Yeah, a little village.
03:21 – 03:24
Zak and I get to work together a fair amount.
03:24 – 03:27
um We’re very lucky to have him.
03:27 – 03:34
His background: deep, technical, in the trenches, an absolute technical talent.
03:34 – 03:40
I’ve had the privilege over my career to work with a lot of people who have immense technical acuity.
03:41 – 03:54
Not trying to puff this up, but Zak clearly is at the top of his game, and he's gonna bring to us today, you know, a real in-the-trenches view and how to tackle that from a
03:54 – 03:54
tools perspective.
03:54 – 03:57
So I hope I did that justice.
03:57 – 03:59
You’re gonna hear the magic here today.
03:59 – 04:03
So that said, I wanna start with you, Jessica.
04:03 – 04:11
So when we were prepping, you whipped out a term that we all need some education on, and that’s this
04:11 – 04:13
Optimism bias.
04:13 – 04:18
If you could clinic us on that that’ll give us a great kind of launch pad for for the session.
04:18 – 04:19
Sure.
04:19 – 04:31
So one thing I find really helpful in cyber security, because I'm always looking at the people side, is that we can learn so much from neuroscience, from psychology, from sociology,
04:31 – 04:41
from these fields that understand human behavior and ways of thinking, what makes us tick and what makes us do the things we do. Among
04:41 – 04:48
the concepts and long-standing bodies of research from neuroscience is this idea of the optimism bias.
04:48 – 04:55
Research shows that 80% of people around the world are wired towards optimism.
04:55 – 05:04
That is regardless of where we’re from, it’s regardless of gender, of age, of ethnicity, of socioeconomic background.
05:06 – 05:10
And we look out at the wider world and think things are getting worse.
05:10 – 05:17
But from an individual, a family, a team perspective, we have this optimism bias.
05:17 – 05:21
Essentially thinking the bad thing won’t happen to us.
05:21 – 05:26
And the research, there’s been so much research done on this, a lot led out of London by Dr.
05:26 – 05:29
Tali Sharot, so if you want to go and have a look at it, I highly recommend.
05:30 – 05:35
And this research is generally looking at things like health, things like divorce,
05:36 – 05:38
don’t think the bad thing will happen to them.
05:38 – 05:42
I think this has such applicability to cyber security.
05:42 – 05:54
When I first started to learn about it, I thought this is why when we go to a board with all the statistics of the fact that an incident is likely to happen at some point, why
05:54 – 05:55
they maybe shrug it off.
05:55 – 05:58
Because they think that happens to everybody else.
05:58 – 05:59
But that’s not going to happen to us.
05:59 – 06:06
And I think this has such an impact on all sorts of elements of cybersecurity,
06:06 – 06:08
not least incident response.
06:08 – 06:15
And I’m guessing that that bias really exacerbates itself when things, bad things do happen.
06:15 – 06:15
Yeah.
06:15 – 06:19
Because they’re like this goes against everything I could have thought.
06:19 – 06:20
Yeah, that’s it.
06:20 – 06:30
It undermines that kind of muscle memory, that resilience, that individual and team and even organizational resilience because we’re thinking well we didn’t think that this would
06:30 – 06:30
happen.
06:30 – 06:35
So there’s a level of shock and you know tools, processes and the
06:35 – 06:41
human element of incident response just aren’t in place because it was never expected.
06:41 – 06:42
I see.
06:42 – 06:43
And just curious one thing.
06:43 – 06:48
So we probably all know people in our lives who are glass half full, glass half empty.
06:48 – 06:54
To use your statistic, 80% of people, with respect to this, are glass half full.
06:54 – 06:55
Yes.
06:55 – 06:57
80% full, to use the analogy.
06:57 – 07:05
But you meet people and there are people that are less than half full, but you’re saying internally when they think about it that that’s
07:05 – 07:07
where they’re still at this 80.
07:07 – 07:15
And also that difference between a cynicism maybe around the world and thinking about how things are gonna pan out for you.
07:17 – 07:21
And internally, you know, your expectations. And optimism is a good thing.
07:21 – 07:25
I’m not saying we should try and design out optimism.
07:25 – 07:31
I’m an optimistic person myself and I do often recognize optimism bias at play with myself.
07:31 – 07:32
Optimism is
07:32 – 07:41
great. We wouldn’t have got to where we are as human beings if we weren’t optimistic and also, you know, we’d struggle to get through the day quite frankly if we didn’t have some
07:41 – 07:46
optimism. So instead of trying to design it out,
07:47 – 07:51
one thing is to recognize it and another thing is to work with it when we can.
07:52 – 08:02
So rather than, you know, the saying of like, it’s not if but when and trying to scare people, we can say, you know what, an incident is likely to happen, but we can be
08:02 – 08:03
prepared.
08:03 – 08:05
There’s things we can do to have uh
08:05 – 08:12
tools in place, to have processes in place, and the more we prepare, the more resilient we will be.
08:12 – 08:17
So taking an optimistic framing of something that is challenging.
08:18 – 08:26
And in punching through for the folks who are in that 20% category, how difficult is that?
08:26 – 08:31
Or how more or less are they prepared because they’re biased the other way?
08:31 – 08:35
Being biased against optimism can then be challenging in itself.
08:35 – 08:41
Because you can think, well, the bad thing’s always going to happen, so why should I bother?
08:41 – 08:48
So that kind of messaging around being prepared, funnily enough, it kind of works for both sides, helpfully enough.
08:48 – 08:53
Yeah, Ricky, I’m sure large projects when they go wrong, right?
08:53 – 08:55
Where have you seen this play out?
08:55 – 09:09
So look, during a crisis we often don't rise to the level of our IR plans, or any plans. We often fall to the level of our execution practice, right? Rehearsed behaviors often beat
09:09 – 09:18
great intentions. And incident response, or as I call it, incident resiliency, is just that: it's a goal
09:19 – 09:25
but without practice and rehearsed behaviors in that mental model and that mental muscle memory.
09:27 – 09:35
And if your first disaster is where you have your first practice, the outcome is not going to be resiliency, it'll be bad press.
09:36 – 09:50
So to tack on to Jessica's point about optimism bias, we often tend to forget that we as humans have to then look at the processes and then the tools at our disposal, bring them
09:50 – 09:54
all together in this highly complex world that we are in today.
09:55 – 10:01
And we heard from our keynote speakers this morning that the technology, the landscape, everything around us is becoming more and more complex.
10:02 – 10:06
And in that diverse and complex environment,
10:06 – 10:17
having one, an optimism bias, and two, this false notion of security that we have plans but we haven’t really rehearsed them or practiced them is just setting us up for more
10:17 – 10:19
failures than successes.
10:20 – 10:30
That reminds me, I’m sure if we did a raise of hands or we went and chatted with you all, a lot of folks are like, hey, look, I could be as optimistic as I want and I want to be
10:30 – 10:34
ready, but we don’t have the budget, we don’t have the time, we don’t have the people.
10:34 – 10:36
So I’m going to guess when those
10:36 – 10:41
forces push against the optimism bias, that’s also a force to be reckoned with, right?
10:41 – 10:47
Like, I want to do this, but the institution that I work for can’t afford it, can’t get it done.
10:47 – 10:52
So maybe we could explore that a little as we go on as well, because you probably see that, Zak, right?
10:52 – 10:58
I mean, when rough things happen, what happens with these folks?
10:58 – 11:04
How do they react in the middle of the situation, in the pressure cooker, if you will?
11:05 – 11:08
Because you’ve to understand everyone’s still human.
11:08 – 11:11
It doesn’t matter how many times you’ve practiced something.
11:11 – 11:19
Although the exercise and the practicing does help a lot, that’s essentially how you get your resilience.
11:19 – 11:23
But people go hide when incidents happen.
11:23 – 11:24
They go hide.
11:24 – 11:27
They try and clean their tracks
11:28 – 11:33
and see if it wasn’t any of their mistakes and then start blaming and so on and so forth.
11:33 – 11:40
And that basically comes from the initial standpoint where it wasn’t tested enough.
11:41 – 11:46
A cyber resilience or a cyber recovery plan, you’ve got the plan, but it wasn’t tested enough.
11:46 – 11:56
And as soon as you drill that in, so it becomes muscle memory and everyone knows precisely what they're going to do next on cue,
11:56 – 12:00
then you don’t have that kind of behavior anymore.
12:00 – 12:06
Which is also interesting because I heard your statistics and I wish I brought some of my own.
12:07 – 12:10
But how does insurance then work into that?
12:10 – 12:17
Like you said, we don't want to pay for insurance, right?
12:17 – 12:20
We hate that kind of expense.
12:20 – 12:23
But how does that work into the optimism bias?
12:25 – 12:28
Because we still pay for car insurance, health insurance.
12:29 – 12:30
Interesting point on that.
12:30 – 12:36
If we think about health insurance or if we think about life insurance, how are they framed?
12:36 – 12:42
It’s not framed as illness insurance or death insurance.
12:42 – 12:49
So again, we can learn something from those industries and the fact that that’s not happened by accident.
12:49 – 12:52
There is framing at work there.
12:52 – 12:54
To help appeal to people.
12:54 – 12:56
Rather than taking care of your family.
12:56 – 12:56
Exactly.
12:56 – 12:57
Paying it forward.
12:57 – 12:58
Right.
12:58 – 12:58
Yes.
12:58 – 12:59
You can think about the adverts that you see.
12:59 – 13:02
You can think about the framing around all of that.
13:02 – 13:07
It’s all a very optimistic and positive framing of things that essentially are negative.
13:07 – 13:08
Yeah.
13:08 – 13:16
I just want to take a quick pause and tell you that the AC got turned up here, because we were losing pounds so fast up here.
13:16 – 13:17
I can speak for myself.
13:17 – 13:20
That was incredible, what a miracle just occurred.
13:20 – 13:21
Ok
13:23 – 13:28
Can you imagine? State Farm: get yourself some death insurance.
13:28 – 13:29
Like a good neighbor.
13:29 – 13:36
God, it just rolls off the tongue.
13:36 – 13:39
But in cybersecurity, we often frame stuff the other way.
13:39 – 13:39
That’s true.
13:39 – 13:40
Yeah.
13:40 – 13:44
We’re not taking the lead from a well-established.
13:44 – 13:44
Yeah.
13:44 – 13:46
And this isn’t my secret.
13:46 – 13:52
A lot of the things that I bring to cybersecurity, I’m not inventing or reinventing the wheel.
13:53 – 14:04
I’m taking known knowledge, you know, research that’s out there that anybody can draw upon just in fields that are outside of cyber security and bringing it in here.
14:04 – 14:09
And there’s so much we could advance if we look more at those fields.
14:09 – 14:14
And I think we will. But neuroscience, psychology, sociology, marketing: there's so much we can learn.
14:14 – 14:23
So to that end, not asking you to give away the farm, because I know you get a pretty penny, but what are a couple
14:23 – 14:34
of takeaways, if you were to CliffsNotes it? When you do an engagement, what are a couple things that could be top of mind that you want your folks to take away when you do one of
14:34 – 14:35
these?
14:35 – 14:46
Yeah, it varies, but in terms of this particular conversation, I think one thing is, in cyber security we often rely on fear, don't we, to try and spread our messaging.
14:46 – 14:50
And I understand that people think we can scare.
14:51 – 14:53
someone into behavior.
14:53 – 14:55
It doesn’t really work like that.
14:55 – 14:58
Using fear to try and change behavior is very complicated.
14:58 – 15:01
And we
15:01 – 15:04
use it as a kind of a hammer.
15:04 – 15:09
What we can think about instead is how we can frame our messaging.
15:09 – 15:12
We can think about working with human bias.
15:12 – 15:14
Because the bias is there.
15:14 – 15:16
So how can we frame our messaging?
15:16 – 15:28
For example, if we want a board or a team to practice incident response, then rather than trying to scare them into thinking that's important, it can be, you know, a realistic presentation
15:28 – 15:29
of the threat.
15:29 – 15:31
But if we practice, that makes us more
15:31 – 15:33
resilient.
15:33 – 15:35
That resilience comes from repetition.
15:35 – 15:37
So thinking of that more optimistic framing.
15:37 – 15:42
Okay, and we talked a little bit about this, I remember, when we were in Las Vegas.
15:43 – 15:51
You do something so unique and if someone’s saying, wow I’ve got to get Jessica or somebody.
15:51 – 15:53
How do you sell this internally?
15:53 – 15:55
This is such a unique concept.
15:55 – 16:02
It’s not part of, probably everybody sitting here going, wow, I had no idea somebody did this, right?
16:02 – 16:12
When you see successful ways to sell your services and punch through to the exec team, what are a few routes to make that happen?
16:12 – 16:16
Yeah, so I’ve been doing this for 15 years.
16:16 – 16:20
In the early days, I certainly wrestled a lot more with that.
16:21 – 16:31
And I think one thing is that we’ve seen this growing recognition in the cyber security industry that people are at the heart of cyber security just as much as tools or
16:31 – 16:33
processes.
16:33 – 16:39
But essentially it often takes a level of cultural maturity in an organization.
16:39 – 16:50
And unfortunately it often takes that experience of an incident either directly or an organization seeing one of their peer organizations because that breaks through the
16:50 – 16:51
optimism bias.
16:53 – 16:55
I guess you could look at a double-edged sword.
16:55 – 17:03
It’s both unfortunate that that’s what it takes, but it’s also if that has to be the catalyst, and it’s not you and maybe a peer, go forward.
17:03 – 17:04
right.
17:04 – 17:05
Fantastic.
17:05 – 17:06
Thank you.
17:08 – 17:13
I did not ask Jessica, hey, give me a full rundown on that because it’s fascinating.
17:13 – 17:17
I think for all of us coming from a technical area, it’s so good.
17:17 – 17:19
So yeah, absolutely.
17:20 – 17:21
All right.
17:21 – 17:27
So Ricky, when you and I spoke, you were talking about minimum viable.
17:27 – 17:35
Interesting, we heard a lot in the keynote this morning, but we didn’t eliminate that term on purpose, but it’s kind of buried under sort of the next level.
17:35 – 17:38
So you want to talk a little bit about that?
17:38 – 17:38
Sure.
17:38 – 17:42
So to the point of practice makes perfect, right?
17:42 – 17:44
But how do you practice and what do you practice on?
17:44 – 17:57
So if you look at a large ecosystem, you have such a huge, disparate system of technologies at your disposal that's running your business, how do you know where to start practicing
17:57 – 17:59
and how often do you practice, right?
17:59 – 18:08
So at Commvault we talked about the notion of minimal viable, and what that really is, is leveraging tools like a BIA assessment, which we've been doing for many, many
18:08 – 18:09
years.
18:09 – 18:11
You start to define what’s mission critical for your business.
18:11 – 18:16
So active directory could be mission critical, a patient care system could be mission critical.
18:16 – 18:26
And you define and say, you know what, that’s what I’m going to start with first because if there is a compromise and I’m shut down, this is what I want to bring up first.
18:26 – 18:35
And then going to my, you know, from mission critical to business critical system, that could be your emails, your comms, your other applications and workloads.
18:35 – 18:38
And then you define the supporting applications
18:38 – 18:45
once they’re up, together with mission critical, business critical, and supporting apps, now you have your entire environment operational.
18:45 – 18:50
Just remember, we often talk about incident response, and we think security.
18:50 – 18:58
Incident response is required as a process or as a institutional knowledge, not just for security incidents.
18:58 – 19:01
You could have a bad software release that could take you down.
19:01 – 19:08
You could have an AI, agentic AI software, that may do something, may make a call,
19:08 – 19:10
and shut down your environment.
19:10 – 19:13
To me, incident response is beyond just security.
19:13 – 19:17
It really helps you define your enterprise operational capability, right?
19:17 – 19:27
And this is why the notion of minimal viable comes in, to help you really define your applications and your workloads and then start to build testing and
19:27 – 19:28
procedures around it.
19:28 – 19:37
The other thing I want to sort of highlight on quickly is that if you look at the NIST framework for incident response, Special Publication 800-61,
19:38 – 19:51
it breaks down incident response into four elegant phases: prepare, detect, and then it gets into containment, eradication, recovery, and the last is reassess, redo your
19:51 – 19:52
baseline.
19:52 – 19:58
Well, the problem with this whole notion of recover, as we heard earlier today, is that it creates blind spots.
19:58 – 19:59
Because what are you really recovering?
19:59 – 20:01
How well do you trust what you’re recovering?
20:01 – 20:04
Is that data fully trustworthy?
20:04 – 20:07
Does it have the integrity checks in it?
20:07 – 20:13
And I think this is why NIST needs to reassess what they’re saying as recover and maybe think rebuild.
20:13 – 20:19
Because, you know, recover to me essentially seems like you have a wall that’s damaged and you’re patching the wall.
20:19 – 20:22
Well, it’s not that elegant, You’re patching the wall.
20:22 – 20:25
Rebuild is essentially rebuilding that wall from a known blueprint.
20:26 – 20:37
And this is why, you know, you look at our solutions like Cloud Rewind and others, we actually help you rebuild the entire application stack, right, with known data, with good data.
20:37 – 20:42
And now you have a level of trust in your recovery through the rebuild process.
20:42 – 20:50
So let me end, you know, just by saying: don't just think about incident response as security only.
20:50 – 20:53
It’s anything that can take your business offline.
20:53 – 20:57
You cannot bring back everything in the first minute, right?
20:57 – 21:05
The goal of incident response is to basically detect in seconds, mitigate in minutes, and stay resilient forever.
21:05 – 21:09
And for that you need to have definition of what’s minimal viable for your company.
21:09 – 21:12
So in that…
21:12 – 21:14
just want to do a raise of hands here.
21:14 – 21:17
Let’s take the Wayback Machine, just pick 10 years.
21:17 – 21:23
And we were walking around and telling everybody, hey, we’ve got this incredible service, not us, Commvault, but the industry.
21:23 – 21:25
We’re going to do application mapping.
21:25 – 21:32
Because before you can even move up stack to a larger discussion of that, you have to first understand the level of interconnectivity.
21:33 – 21:34
And I’m
21:34 – 21:38
to say I’m a glass half full guy, just making sure Jessica knows.
21:39 – 21:45
But we tried so hard as an industry for that and we failed with so many enterprises.
21:45 – 21:53
If you ask, in fact I won't even ask, because people aren't going to want to raise their hands: hey, do you have a full application map?
21:53 – 21:56
I would bet we do not get a lot of hands.
21:56 – 22:04
And so that’s okay maybe because now we’ve raised it up that we can recover at a different layer of abstraction.
22:04 – 22:09
But I’m just curious to your take on that because you probably went through that, right, in that first round.
22:09 – 22:10
Does this help?
22:11 – 22:17
You still have to define minimal viable, but does it help that you’ve abstracted out that more detailed application mapping?
22:17 – 22:20
Not just application mapping, even data mapping.
22:20 – 22:23
Your applications are essentially consumers of the data.
22:23 – 22:31
And at the end of the day, ransomware, malware, or any adversary that's trying to compromise you is not compromising you because he loves your application.
22:31 – 22:34
They're after what powers it all, which is the data, right?
22:34 – 22:39
So you need to know where your data resides, and our attack surfaces are just getting out of control.
22:39 – 22:43
And you you’re asking that if we ever knew where our application is.
22:44 – 22:51
Maybe in the 70s and the 80s, when you had client-server applications or a dumb terminal into a mainframe.
22:51 – 22:55
Today you are building in the cloud,
22:57 – 22:59
you’re launching VMs.
22:59 – 23:06
You have the whole DevSecOps process that's launching applications on the go, and data resides everywhere.
23:06 – 23:15
And I think this is why having that end-to-end visibility matters: who has the data, who's accessing it, where do the controls lie.
23:15 – 23:19
And those basics of eating your security vegetables have come a long way.
23:19 – 23:22
You still have to make sure that you’re eating your security vegetables.
23:22 – 23:26
Access control from identity management,
23:26 – 23:32
ensuring that you have zero trust and all those lateral movements between the data points are secure.
23:32 – 23:43
Because your incident response capability, or as I call it, incident resiliency, which is made up of respond, recover, and rebuild, relies on how
23:43 – 23:44
well you know your environment.
23:45 – 23:48
Okay. So, Zak, I have to ask you a question.
23:49 – 23:51
Everybody’s overloaded, right?
23:51 – 23:58
Everybody you work with, and when you come in, you’re supplementing with your incredible skill set, skill sets they don’t have.
23:58 – 24:00
Do they, do folks not have time to do this?
24:00 – 24:10
I mean, these are important things Ricky's pointing out, but when you actually hit the ground level, why can't we get some of this done?
24:10 – 24:13
What’s holding folks up from defining this?
24:13 – 24:16
Well, because there’s actually a bunch of things.
24:16 – 24:20
I’d actually say firstly, culture.
24:21 – 24:32
You’ve got these departments that you, especially when you’ve got like older companies, and I want to point out to the application mapping, if your application is two months old,
24:32 – 24:34
you precisely know what it’s about.
24:34 – 24:42
Once it gets to two years and it's got feelers all over the place, then you have no idea what's going on.
24:42 – 24:47
And just like that, with the older company, you get these cultures.
24:48 – 24:53
Security guy, backup guy, endpoint guy, and they don’t mix.
24:53 – 24:57
You mix with your own guys, right?
24:57 – 25:05
I told someone yesterday that I have 30 years of IT experience, and 17 of those were dedicated to security.
25:06 – 25:18
Not one day in those 17 years did I ever meet my backup guys, which is insane, because in the end, if you think about it, what am I doing as a security guy?
25:18 – 25:23
I’m not securing the people or their cars.
25:23 – 25:27
I’m also securing the data just like the backup guys.
25:29 – 25:38
I think in companies and cultures we need to change the way we think of each other’s roles and what we are actually doing there.
25:38 – 25:49
I’m there to put a firewall, IPS, SIM, SOC, all those nice things and do the forensics and pain testing and architecture.
25:49 – 25:54
The reason for that all boils down to what everyone is doing there.
25:54 – 26:04
And it’s building that data, building the applications, and serving the service to, or outward to the common folk,
26:05 – 26:08
Jessica, I could feel you on the edge of your seat.
26:08 – 26:10
How do you bust through that?
26:10 – 26:17
I mean again, this has to be sold at a very different level to change that cultural piece.
26:17 – 26:19
I love that Sark brought up culture.
26:20 – 26:28
To Ricky’s point as well about when an incident happens or preparing for an incident is about so much more than security.
26:28 – 26:35
And yeah of course, we’re talking about infrastructure, we’re talking about backups, but we’re also thinking about the whole organization.
26:37 – 26:40
We’re thinking about the culture and we’re thinking about how everybody can be impacted.
26:40 – 26:46
So as Ricky was talking, I was thinking about what’s been happening in the UK this year.
26:46 – 26:56
So I’m from the UK, I’m now based in Las Vegas, but I’ve spent a lot of time back in the UK this year, and one of my favorite places to go is Marks and Spencer’s.
26:57 – 27:03
Marks and Spencer’s are very big, as everybody knows, very big, very long.
27:03 – 27:14
incident this year and I was there on day one I was trying to buy some groceries on my card wouldn’t work and I’m thinking no what’s going on here and it turned out it was day
27:14 – 27:20
one of a cyber attack that took their website down I’ve lost track of how long but we’re talking about months.
27:20 – 27:23
Supply chain, nothing on the shelves.
27:23 – 27:31
I was going into Marks and Spencer’s and the shelves were empty or there would be one product covering like a whole shelf.
27:31 – 27:35
So there would be like hundred bottles of Coca-Cola rather than the usual variety.
27:35 – 27:39
So I’m chatting to everybody in there on the shop floor.
27:39 – 27:42
I’m going into the cafe and I’m chatting to the people working there.
27:42 – 27:46
Not telling them I work in cyber security, but I’m just like, oh
27:46 – 27:47
you’ve got no avocados.
27:47 – 27:50
Is this something to do with the cyber incident?
27:50 – 27:53
And I found it so interesting how they responded.
27:54 – 28:00
Firstly, and this happened with everybody I spoke to, firstly, they acknowledged how challenging it was for them.
28:00 – 28:02
And they said, we’re coming in every day.
28:02 – 28:04
We don’t know what we’re coming into.
28:04 – 28:14
We don’t know if we’re coming into no stock to give customers or to provide to customers, or if we’re coming into it being like the week before Christmas and we are overloaded with
28:14 – 28:15
stock.
28:15 – 28:21
We’re having customers ask if we’ve got XYZ and we can’t check on the systems, we don’t know.
28:21 – 28:23
So they would acknowledge how bad it was.
28:23 – 28:29
And then they would say, but we’re so lucky because we all work so well as a team.
28:29 – 28:31
We’re all supporting each other.
28:31 – 28:34
We’re hearing from head office all the time.
28:34 – 28:41
And I heard on multiple separate occasions how much respect they had for the security team.
28:41 – 28:45
They were saying things like, the tech team is working so hard to get things right.
28:45 – 28:47
And they’re working around the clock.
28:47 – 28:52
I hear they’re sleeping in the office, they’re getting the pizza delivered.
28:52 – 28:54
And this, by the way, this was nowhere near the head office.
28:54 – 28:57
This is up in the northeast of England compared to London.
28:57 – 28:59
And I thought there’s so much we can learn from that.
28:59 – 29:02
And I kept reflecting on Wang.
29:02 – 29:03
And I thought, well, why do I
29:03 – 29:04
love going to Marks and Spencer’s?
29:04 – 29:10
It’s like, it’s got a brand of trust and it’s got a personality.
29:10 – 29:17
And I can only take from this that that is internal as well as projected externally.
29:17 – 29:20
So the communications is something we can learn from.
29:20 – 29:27
The respect that was coming from the top of the organization all the way through is then mirrored back.
29:27 – 29:30
And this isn’t built in an incident.
29:30 – 29:33
This is built in a company culture over years
29:33 – 29:36
when things are good and then it’s tested when things go wrong.
29:36 – 29:38
That’s interesting.
29:38 – 29:44
Another factor that makes this whole thing very complicated is today we are living in the world of shared responsibility, right?
29:44 – 29:54
So today enterprises are not just struggling with the silos that they have internal to their IT, legal, support, application, and database teams,
29:54 – 29:59
but you’re also now running an environment where the workloads are cost-
30:05 – 30:09
Then you also have SIs who are helping you do custom integration and stuff like that.
30:09 – 30:12
So where does the responsibility really start?
30:12 – 30:23
Because you have to look at the shared responsibility model that we have put ourselves in intentionally where we have security of the infrastructure, security of the clouds, of the
30:23 – 30:25
applications, of the on-prem.
30:25 – 30:35
And in the shared responsibility matrix, even though everybody understands the need for building the hygiene culture, it's a competing priority.
30:35 – 30:41
And that prevents people from truly building these IR plans and testing them out.
30:41 – 30:47
And I think companies who are going to win this battle are the ones who are intentional about creating that shared accountability.
30:47 – 30:54
So you know what, it’s important for all of our partners to work together with us and build these plans and then execute them.
30:54 – 30:59
Because at the time of crisis, you’ll be relying on the shared responsibility matrix.
30:59 – 31:03
And every player who’s involved in it to help you get back.
31:04 – 31:09
Yeah, an interesting point. And Zak, when you said, hey, I didn't speak with them.
31:09 – 31:11
It wasn’t that you disliked them.
31:11 – 31:14
It wasn’t that you had any negative bias or anything toward them.
31:14 – 31:16
You’re just like, I’ve got a lot of work to do.
31:16 – 31:19
This is what I do.
31:19 – 31:25
And it has to transcend such a large vertical, right, within the organization.
31:25 – 31:26
Us security guys.
31:26 – 31:30
We do think a lot about ourselves, right?
31:30 – 31:30
Yes.
31:33 – 31:35
Top of the line.
31:35 – 31:35
Right.
31:35 – 31:39
And I also wonder if anything was built in to encourage you to talk.
31:39 – 31:40
Not at all.
31:40 – 31:42
Or to learn together.
31:42 – 31:47
We can say, we can look at that as being like, we didn’t do that.
31:47 – 31:49
But did the organization facilitate it?
31:49 – 31:53
Was that something that was designed in to help encourage it?
31:53 – 31:55
Yeah, it’s, no way.
31:55 – 32:01
And that is why I tell people you’ve got to
32:01 – 32:07
take your key guys throughout the whole infrastructure and let them have a beer every week.
32:07 – 32:10
They gotta talk to each other.
32:10 – 32:19
Even if it’s not collaboration or anything, they gotta know the name and the number and what they do in the organization in order to get that whole thing smooth.
32:19 – 32:30
Because you’re segregating, without your knowing, you’re segregating your whole organization from inwards out and there’s no way of, there’s no…
32:30 – 32:34
links of actual open communication between them.
32:34 – 32:45
I had one organization, I remember in lockdown, they were really struggling with how do we get these teams to interact with each other, especially now when they weren’t in the
32:45 – 32:48
office, you know, and they did Lego building.
32:48 – 32:59
They sent out a little Lego pack to everybody individually, got them together on a Zoom or Teams or whatever, and they all built Lego at home, separate from each other, but
32:59 – 33:00
together.
33:00 – 33:01
And it worked.
33:01 – 33:04
Simple, reasonably priced, builds some teamwork.
33:04 – 33:05
How ironic.
33:05 – 33:06
We’re in the keynote.
33:06 – 33:09
There’s agents doing things automatically.
33:09 – 33:11
There’s going to be people on Mars.
33:11 – 33:16
And we’re literally just talking back about Legos and pizza and making time.
33:16 – 33:18
But it’s real, right?
33:18 – 33:24
We think at such a high level, but there's just some base stuff that we've got to get back to.
33:24 – 33:28
So, Zak, on that note, we heard about ResOps.
33:28 – 33:30
It’s trending
33:30 – 33:32
deep in Twitter or X.
33:32 – 33:34
Sorry, my God, I used the legacy term.
33:34 – 33:35
So.
33:35 – 33:37
Talk to us about that.
33:37 – 33:44
I mean, I know it’s something we’re trending, trying to create, but there’s something real under there, right?
33:44 – 33:54
Because the tools, the interconnection, give it whatever name we want, it still has real pieces underneath that are making ResOps a reality.
33:54 – 34:03
So maybe from a tools perspective and how we’re looking at this resilience operations, give us a feel for what does it really mean at the ground level?
34:06 – 34:09
So ResOps is…
34:10 – 34:17
Well, you got the tools and the people and the processes, but if you don’t test it, I’m just going to come back to that testing thing, Chris.
34:17 – 34:24
You’ve to keep If you don’t test it, you’ve got all these tools which you subscribe to for years and years.
34:24 – 34:30
It’s like my um membership at uh a golf range where…
34:30 – 34:31
uh
34:31 – 34:38
I went there twice in two years, and I ended up spending thousands for those two times.
34:38 – 34:47
So you’ve got all these tools that you pay for and you subscribe to and you buy servers for and they’re running there in the background.
34:50 – 35:00
If you don’t know how to use them and you get to an incident and they actually just gonna slow you down in recovering.
35:00 – 35:08
Frankly those tools are also something that you need to restore once it hits the fan, right?
35:08 – 35:16
So the tools aren't everything. Frankly, I'd say to companies, don't even worry about the tools at the moment, because
35:17 – 35:24
Because if you don’t, if your people don’t know how to act, like if you’re in incident right now, what are you going to do?
35:24 – 35:25
How do you act?
35:25 – 35:27
What’s the next thing that you’re going to do?
35:27 – 35:30
Now, I understand the optimism thing.
35:30 – 35:40
It’s just for us to kind of get that mindset into a customer, you can’t say it’s all just…
35:42 – 35:44
roses and cream, right?
35:44 – 35:46
You gotta say what’s gonna happen.
35:46 – 35:54
Otherwise, if you keep on being optimistic, they’re just gonna say, then it doesn’t sound like we need any of this, do we?
35:54 – 35:56
Yeah, and I agree.
35:56 – 35:59
It’s not about denying that the bad thing’s gonna happen.
35:59 – 36:06
It’s about how you talk about something negative in a way that still can take people with you.
36:06 – 36:08
same When I’m raising awareness of threats.
36:08 – 36:10
You know, I go and I do awareness raising.
36:10 – 36:12
I make awareness raising material.
36:13 – 36:16
And obviously I’m talking about the bad stuff.
36:16 – 36:20
I can’t raise awareness of cybersecurity and not talk about the threat.
36:20 – 36:27
But it’s doing that in a way that’s proportionate so people don’t think you are exaggerating for your own benefit.
36:27 – 36:37
And then thinking about how you can enable and empower people, even when you’re talking about something negative, so they feel that they can still engage with it.
36:37 – 36:39
And as soon as you…
36:39 – 36:41
Well, obviously, you're not going to wait for the incident.
36:41 – 36:43
But as soon as you do like an
36:43 – 36:55
exercise, a virtual incident exercise like we've got in Recovery Range, then you're going to understand, you're going to see all these 10, 20, up to 100 tools, and you'll say, well, we
36:55 – 37:03
haven’t even used any of them in recovering our environment right um if you get to that point
37:04 – 37:16
then you can obviously shed all of that dead skin and eventually get to a point where you're a lean, MVC (minimal viable company) type of guy that can easily recover.
37:16 – 37:21
Well, you know what you want to do and you’ve trained your people well enough.
37:21 – 37:27
you can’t, so you can’t go through those exercises without the communication between those teams.
37:27 – 37:28
And that is one term.
37:28 – 37:28
So.
37:29 – 37:33
If you don’t want to send them for beer, then at least get them in the room.
37:33 – 37:42
And start with those exercises where you go through the process of things failing in your environment.
37:42 – 37:44
Then ask, what are you going to do?
37:44 – 37:45
What are you going to do?
37:45 – 37:49
And that’s also another finger pointing exercise.
37:49 – 37:51
It’s like we need to tabletop, right?
37:52 – 37:56
You’ve got to get people out of their comfort zone appropriately.
37:56 – 37:59
They’ve to realize what tabletops are in the park.
37:59 – 38:03
I mean, tabletop is just one thing that you also need to use.
38:04 – 38:05
You have tabletop exercises.
38:05 – 38:07
You have rebuild drills.
38:07 – 38:10
You also have the red team, purple team exercises, right?
38:10 – 38:12
And they each have a place in an organization.
38:12 – 38:20
So let’s say an organization is a majority number, they are established, what their minimal viable company needs to look like, they have all their processes in place, they
38:20 – 38:27
have the shared responsibility figured out, and they have also understood that, you know, yes, we've got to look beyond the human element.
38:27 – 38:30
How do you start doing it?
38:30 – 38:31
What helps you prepare?
38:31 – 38:37
So tabletop exercises are essentially low stress exercises.
38:37 – 38:38
They’re not training modules.
38:38 – 38:48
They’re actual half day full day sessions where you’re supposed to come together with the shared responsibility to set people to practice one of the most scenarios that your
38:48 – 38:51
company is most likely to experience.
38:51 – 38:53
And it helps you build clarity.
38:55 – 38:57
Red teams are of a totally different nature.
38:57 – 38:58
They’re life fighters.
38:58 – 39:02
And red teams essentially help you build humility.
39:02 – 39:07
So there’s a real distinction between what’s building clarity, what’s building humility.
39:07 – 39:08
I like that.
39:08 – 39:11
And then rebuild exercises are essentially a source of truth.
39:11 – 39:16
It’s a reflection where you think about your cyber resiliency maturity assessments.
39:16 – 39:17
And you say, you know what?
39:17 – 39:20
We have all these plans that we have built based on our assessment.
39:20 – 39:23
How well do our plans hold water?
39:23 – 39:27
That’s the truth that the Weibulls exercise is different at all.
39:28 – 39:34
All of these have to come together to really give you a real sense of how prepared you are.
39:36 – 39:38
So you’re saying it’s easy.
39:38 – 39:40
Let me write that down.
39:40 – 39:42
In a way it is.
39:42 – 39:43
Please, please.
39:43 – 39:44
I’m cut you off.
39:44 – 39:45
No,
39:51 – 39:52
First thing
40:03 – 40:09
The other two times a month that we have a meeting, we’re talking about business.
40:09 – 40:13
What could happen, what does happen, who’s in charge, who to get in touch with.
40:13 – 40:19
And I’ve built up communication between all of them so that each department saw it like the exact same.
40:19 – 40:25
Each person knew, you know, that he’s not such a bad person even though he works over there or she works over there.
40:25 – 40:28
Now I know we’re a family, we’re a company.
40:28 – 40:29
We each have a stake in it.
40:29 – 40:31
If something goes wrong…
40:31 – 40:33
We know how to fix it.
40:33 – 40:34
We trust each other.
40:34 – 40:39
Whereas I have worked at companies where the person will say, that’s your problem.
40:39 – 40:40
That’s not ours.
40:40 – 40:45
And then the company starts to hit the wall. All the profits go down.
40:45 – 40:46
The money goes down.
40:46 – 40:52
then If it’s cyber hack, the insurance company will come and say, you know, you’re not following the procedure.
40:52 – 40:53
We’re going to cut you off.
40:54 – 41:00
The way I built it up is, we don't go off for a beer, because I'm not going to promote drinking.
41:03 – 41:04
We’re going to lunch.
41:04 – 41:05
Don’t talk business.
41:05 – 41:07
Just learn about each other.
41:07 – 41:09
Keep it at the human level, right?
41:09 – 41:09
Right.
41:09 – 41:10
Keep it at the human level.
41:10 – 41:13
and when you sit down, I tell them…
41:15 – 41:15
service.
41:21 – 41:23
So everybody sits with somebody different.
41:23 – 41:25
Now you’re all talking.
41:26 – 41:27
And get to know each other.
41:27 – 41:31
The following week when we have the meeting, okay, now let’s, we’re going to sit down.
41:31 – 41:32
We’re going to talk.
41:36 – 41:40
I told them, I said, what was that?
41:40 – 41:42
And, uh, Lenox Hill Hospital.
41:42 – 41:43
What are your thoughts?
41:43 – 41:48
What is it that they did wrong that can help us improve?
41:48 – 41:55
Because now we’re using an outside attack to develop a procedure that we can use to make us stronger.
41:56 – 41:58
In the eight months since that’s happened…
42:04 – 42:06
feeling good about themselves.
42:06 – 42:12
I said, it’s all, I wish I would have read Jessica’s book at that time.
42:12 – 42:14
And I tell them it’s all about human communication.
42:14 – 42:18
Because they’re all communicating with each other now.
42:18 – 42:22
Can everybody hear? Did everybody hear that?
42:22 – 42:24
I mean, simple.
42:24 – 42:26
Reinforced and repeatable, right?
42:26 – 42:29
Because you did it month after month after month, right?
42:29 – 42:30
Yeah.
42:31 – 42:32
Fantastic.
42:32 – 42:34
I heard a couple of really key things there.
42:34 – 42:42
One, you used the word trust, building up trust in those relationships, not just trust in terms of professionals, but trust in terms of people.
42:42 – 42:43
And then that
42:43 – 42:52
learning culture of once we’ve kind of built up some of that trust and we’ve got to know each other and feel comfortable with each other, then we’re going to sit down and we’re
42:52 – 42:53
going to talk about an incident.
42:53 – 42:59
We’re going to talk about what maybe went right, what went wrong, and we’re going to reflect on ourselves and what we can do differently.
42:59 – 43:05
Because you built up those trusted relationships, people are then going to feel more comfortable to do that.
43:05 – 43:12
So trust, learning, culture, and really building that wider culture of communication.
43:12 – 43:13
think that’s so important.
43:13 – 43:23
Then as well, if something goes wrong and you don’t know as an individual who to turn to, but you know a few people here and there, you can go and say, hey, this has happened.
43:23 – 43:24
I’m worried about this.
43:24 – 43:25
What can we do?
43:25 – 43:27
And put your heads together.
43:27 – 43:30
But so much of that comes down to culture and people feeling safe.
43:34 – 43:40
I have to give some credit here, because what I’m about to say was from a podcast I filmed a little earlier this morning.
43:40 – 43:49
And the gentleman was the data protection and resilience engineer for Blue Cross Blue Shield of South Carolina, Hillman.
43:49 – 43:49
He’s here.
43:49 – 43:51
He’s great guy.
43:51 – 43:58
And when we were finishing, I said, hey, is there one thing you’d like to leave everybody with in the podcast?
43:58 – 44:02
Because he’s 25 years in data protection.
44:04 – 44:05
He loves what he does.
44:06 – 44:14
And he said, you know, even when I started as a kid out of college on the help desk, to now.
44:14 – 44:19
Force yourself and put yourself in the middle of a situation.
44:20 – 44:24
Don’t be afraid because you’d be amazed who follows you.
44:24 – 44:35
And I thought that, you know, it’s not earth shattering, but the passion, not afraid to get in the firefight, and then it’s amazing who you bring along with you when that
44:35 – 44:36
happens.
44:36 – 44:45
And if you have that in addition to what you’ve done, where they’re meeting and get to know each other, the whole thing dovetails on itself
44:45 – 44:47
and good things happen, right?
44:47 – 44:58
I think my only challenge to that, and it comes probably partly back to culture, is it depends on the environment you’re doing that within, whether people will follow or
44:58 – 45:09
whether people will feel comfortable putting themselves in the middle of that environment, or feeling like, as Zak said earlier, are people going to point the finger?
45:09 – 45:13
Am I gonna put my head above the parapet and I am gonna become the one who’s gonna be blamed?
45:14 – 45:19
And it also potentially raises issues of burnout.
45:19 – 45:24
If I’m always the one running to fix things, when is that going to start to take its toll?
45:24 – 45:27
You need some camaraderie there in the fire.
45:28 – 45:32
And if you’ve got the right culture that’s going to support you, then I completely agree.
45:32 – 45:43
But in organizations where there isn’t that relationship building, where there isn’t that empathy and trust, and where there isn’t that psychological safety of not worrying about
45:43 – 45:44
getting blamed,
45:44 – 45:47
I think that’s a very different scenario.
45:47 – 45:49
It’s a great point.
45:49 – 45:52
It requires multiple layers in the pyramid.
45:52 – 45:55
Not to be negative, I’m just talking about the optimism bias.
45:55 – 45:58
My God, what happened to the 80%?
45:58 – 45:59
Remember?
46:07 – 46:08
stabilized environment.
46:10 – 46:15
of IT which are held by the corporate team and individuals business unit management.
46:17 – 46:21
services or things where things overlap.
46:21 – 46:25
So it's a very, very thin line, how much you want to collapse
46:28 – 46:31
separate things out because it’s a very tricky thing.
46:31 – 46:36
We are like almost like eight different business units who have their own agendas, own priorities.
46:36 – 46:39
Your centralized, you know, the corporate market is one.
46:43 – 46:50
together before you even make one change and things go down south, everybody’s just all in groups are ready to attack you right there.
46:50 – 46:54
It’s a very thin line there, when it comes to shared responses, especially cloud and everything else.
46:54 – 46:58
Like if you have one tenant, multiple subscriptions in Azure.
47:01 – 47:03
and have you separate those things out, right?
47:08 – 47:18
This is why I mentioned the whole shared responsibility, because that's the crux of where we are faltering as an industry. And I think this is why we have so many headwinds of
47:18 – 47:23
regulations now, talking about DORA in Europe and all the other things.
47:25 – 47:38
They are forcing the boards to essentially get serious about resiliency and have provable metrics in place to demonstrate that yes, as an organization, we're doing the right things
47:38 – 47:45
around keeping data secure and being able to show resiliency when compromises happen.
47:45 – 47:58
And I think as more and more regulations become more stringent and the penalties become more severe, some of that unwanted but much-needed cross-organization collaboration and
48:03 – 48:16
And maybe, not trying to squeeze the AI term in, but if we do look at positives, we have got to free up some time for people to do that.
48:16 – 48:17
That is the real problem.
48:17 – 48:24
I mean, I think, reading between the lines, not only could it go downhill, but you just have to allocate time across those teams.
48:41 – 48:41
Anything else?
48:41 – 48:43
I appreciate you both getting the train started.
48:43 – 48:45
We were going right there, so this is great.
48:45 – 48:47
Any other questions?
48:47 – 48:48
Yeah?
49:14 – 49:15
subject.
49:18 – 49:18
So.
49:31 – 49:32
Can I see it?
49:32 – 49:41
So yeah, so what happens when you do these exercises together with the whole team, right?
49:41 – 49:50
The trust you’re building there isn’t a trust that can be broken from an incident because that’s what you’re actually testing.
49:50 – 49:54
You’re testing for an incident that’s going to happen in the future.
49:54 – 49:58
So with that precursor, then
49:58 – 50:15
why would, okay, this is just me asking, why would any of the team members suspect someone else in the team if you do two of these exercises a year and they get told no more than
50:15 – 50:16
that.
50:16 – 50:21
They all get together and they see the incident happen.
50:22 – 50:33
You have a virtual thing that you talk about and what has happened and how it’s unfolded and what was breached, how it was done.
50:33 – 50:35
And you work through that through the whole team.
50:35 – 50:40
Then every one of them actually sees the attack from wherever it comes.
50:40 – 50:52
So when it happens, when it, like, in reality happens, then I highly doubt that that trust is going to be broken because of the incident.
50:55 – 50:56
Yes, yeah.
50:56 – 51:01
Oh, after, if you haven't built the trust.
51:01 – 51:02
Definitely.
51:02 – 51:05
Then it’s that’s actually a very good point.
51:05 – 51:13
That’s a very good point that if you haven’t prior to that, built the trust, then it’s actually going to be much worse.
51:13 – 51:14
Exponential problem.
51:14 – 51:21
Because when we were talking about this, I know from one of the companies that I’ve been
51:21 – 51:30
in South Africa, there was a whole wing of the company, we're talking about four and a half thousand employees on a site.
51:30 – 51:44
The whole wing was cut off from any network, and the security guy, I wasn't the security guy then, but he was pointing at the network guy, and the network guy was just
51:44 – 51:45
pointing back.
51:45 – 51:49
For a few days, that whole wing was off because they couldn’t.
51:50 – 51:53
They kept on arguing whose problem it was.
51:53 – 52:00
And that’s network and security, which you would actually think it’s quite close-knit.
52:00 – 52:01
Yeah.
52:10 – 52:15
It’s probably good to think about trying to a solution and I think it comes top down.
52:15 – 52:21
So, when the highest levels of management are kind of supporting you to get you out of the situation.
52:30 – 52:34
Of senior management backing us, supporting us through this.
52:34 – 52:37
I mean, it was absolutely network and security.
52:37 – 52:42
Two different companies, it was like a sister concern in the parent organization.
52:43 – 52:44
They had a site-to-site tunnel.
52:45 – 52:49
Anything open between the two companies, network-wise, we had on the VPN.
52:49 – 52:53
The sister concern needed access only on a couple of applications.
53:00 – 53:04
Was it the guys from the network team or the guys from the security team?
53:04 – 53:06
Exactly the same situation.
53:10 – 53:21
And this is the idea of a just culture, and either being restorative or retributive.
53:21 – 53:26
So when something goes wrong, do you look at what has gone wrong or do you look at who is to blame?
53:30 – 53:53
I think it's a good question.
53:57 – 54:00
Are you pointing the finger or are you trying to get to the root cause?
54:00 – 54:05
I think all of us have been talking about defense in depth.
54:05 – 54:15
I think now is the time, since we are at SHIFT, to shift our mindset from defense in depth to trust in depth.
54:15 – 54:24
What I mean by that is, you're trusting your people to do the right thing, you're trusting your processes to hold up, and you're also trusting the tools and technology at your
54:24 – 54:25
disposal.
54:25 – 54:28
To act in a way that you have planned.
54:28 – 54:35
For example, so you’re recovering something, but what if the ransomware attack or the malware attack has actually taken down or compromised your recovery plan
54:37 – 54:42
So let’s say you do not have an air gap solution and now your recovery environment is compromised.
54:42 – 54:45
So you’re recovering from an infected environment into fraud.
54:45 – 54:48
So you can’t really trust that data, right?
54:48 – 54:54
And I think this is why we’re going to look at, great, we have all these defenses, resume breach.
54:54 – 55:00
And if you’re resuming breach, then you’re saying, I’m going to trust everything implicitly or explicitly.
55:00 – 55:06
And then I’ve got to build this notion in my head that I want to trust it in depth, which means all of my
55:06 – 55:11
supporting processes should be trustworthy to get me to a trusted state.
55:12 – 55:16
And then I think that finger-pointing and the "excuse me, not me," all of that should go away.
55:16 – 55:18
It should go away, right.
55:18 – 55:20
Well, I,
55:21 – 55:23
we’ve only got a couple of minutes.
55:23 – 55:24
I absolutely appreciate everybody.
55:24 – 55:25
staying.
55:25 – 55:26
I really appreciate the questions.
55:26 – 55:32
We were hoping, we were like, my god, after all the chicken rolls, are people going to have questions?
55:32 – 55:33
And there it was.
55:33 – 55:36
So we couldn’t have asked for a better ending.
55:36 – 55:40
But I want to say an incredible thank you to two groups.
55:40 – 55:41
First, to Jessica.
55:41 – 55:43
Thank you, Ricky, Zak.
55:43 – 55:45
I mean, the miles traveled here.
55:45 – 55:47
Unbelievable, right?
55:48 – 55:53
I hope you were able to appropriately understand Ricky and me with our accents.
55:53 – 55:55
I mean, they have the good ones.
55:55 – 56:00
We just, you know, Dallas and Phoenix, I mean come on.
56:00 – 56:02
But more importantly, I want to thank you.
56:02 – 56:03
Thank you for taking the time.
56:03 – 56:07
I know this was a long session, especially right after lunch.
56:07 – 56:12
And even more importantly, thank you for coming to SHIFT, being clients at Commvault, and putting your trust in us.
56:12 – 56:14
We appreciate that a great deal.
56:14 – 56:15
So.
56:15 – 56:19
Have a great rest of your breakouts and we’ll see you out there.