Zero Trusts Given

If You’re Reachable, You’re Breachable: Modernizing Defense with Zero Trust

Episode Transcription

[Tom Tittermary]

Hey, everybody, and welcome to another episode of Zero Trusts Given, the podcast we do through Carahsoft, along with Zscaler and folks we bring in-house, to have conversations about Zero Trust and the DoD, as well as the civilian government and the defense industrial base. This week, one, it's our first week of video, and it's a little bit odd having to account for it. There's a little bit of light in my face.

 

I'm looking into a camera. It's a little different, but we're hoping that people like it. So one, we're on video this week.

 

And two, our guest this week, we've been trying to pull this one together for a while. He's legitimately one of my favorite people in the industry; I could listen to this gentleman talk for hours and hours and hours. But one, I'm happy to also have with us Tom Gianelos.

 

[Tom Gianelos]

Hey, everybody. Hey, everybody.

 

[Tom Tittermary]

First time you get to see Tom Gianelos and Tom Tittermary. But this week, ladies and gentlemen, I present to you our federal CTO at Zscaler, Mr. Hansang Bae. Hansang.

 

[Hansang Bae]

Thank you. Thank you, Tom. Thanks, Tom.

 

[Tom Tittermary]

Yeah. So I wanted to go right ahead and dive into conversation a little bit. So on the show, week over week, we tend to talk about zero trust relative to DOD quite a bit, right?

 

I wanted to take the opportunity, since we have you here: you've seen this whole thing evolve from the vantage point of one of the key players in the industry. What have you seen, and how is this different from other compliance requirements, initiatives, and missions that you've seen come across government, and specifically the DoD? It falls in the same bucket, but it definitely feels different.

 

[Hansang Bae]

Yeah. I think there's a couple of factors. One, it's interesting, because if I look at, you know, SLED, the whole landscape, it seems like local and state have moved further ahead than federal.

 

So usually, with these big programs, federal takes the lead and everything kind of filters down. Here, I think because the right zero trust can be easy to deploy, and because it helps with manpower and cybersecurity, and it is a force multiplier if you do it right, the locals have, I think, leapfrogged beyond the federal. So there's that.

 

I think the other piece is, this is one of the few times where technology lives up to the promise of, hey, you don't have to make tectonic sea changes to adopt this technology, so you can move out as fast as you want. Whereas in the past, you know, I spent like 16 years at Citigroup, and I tried every latest, greatest thing, right? Data center in a box, network segmentation, the NAC, what I call Black Monday, where I black-holed so many people with a NAC solution that wasn't quite ready.

 

And we've all lived through ISDN, which in the Valley became "It Still Does Nothing," right? Software-defined networking is another one. This time, though, I think zero trust as a concept is good, obviously, but the technology matured really, really fast and kept up with the hype. So I think this is one of those times. The internet is another one.

 

The internet had the luxury of being there. We were using the internet back in the day, but you had to be a Unix wizard, pretty much. If you didn't know Unix, didn't know the command line, there was no web for you, until, of course, Andreessen, you know, created NCSA Mosaic.

 

And I was in Omaha, actually, we were just talking about Omaha. When we downloaded it, all of us in campus computing downloaded it, and we're like, oh, this is cool. This is so much better than Gopher and other text-based things that were there at the time.

 

And I remember clicking Help, About, and it said: my name is Marc Andreessen, I wrote this. I'm like, he's going to do well. Of course, he did really, really well.

 

So the bottom line is, there was that, right? The web itself came from CERN, so yes, the hypertext markup language had been created. But again, it was a nerd knob, right?

 

No one was using it until Andreessen created the tool, right, and it became a commodity. So zero-trust technology, I think, is now a commodity. Given the right solution, anybody can deploy it, right?

 

So I think this is one of those times where the technology hype lives up to the capability and the ease of deployment.

 

[Tom Tittermary]

Yeah, we were having a conversation the other day as well, where we talked about technologies. There are land shifts, obviously, in technology, and there are model shifts and methodology shifts, right? Then, over time, there are things added onto that shift that increase, you know, ability, efficiency, power relative to those things. And then finally, those structures, post-land-shift, post a lot of things being added, get so complex that they kind of crumble under their own weight again, right?

 

So zero trust, to me, feels like one of those shifts. Typically, I understand a compliance challenge as applying blocks and controls to an existing thing, but this feels like both wrapped up in one. It feels like it's the new compliance model, but at the same time, I'm baselining the shift from managing boundaries to associating risk and identity with the process as a whole.

 

So it feels like there's the opportunity through this compliance exercise to simplify and increase the power at the same time in the activity.

 

[Hansang Bae]

Yeah, absolutely. I mean, like every other technology, there's always, oh, who wouldn't want that? Well, because it's hard. If you have to be the smartest kid in the room to design it, there's no chance that you can operate it, right?

 

So the one thing I would say from a zero-trust perspective that's kind of unique is that it actually gives you more information to operate efficiently and more securely. Of course, that's baked into it from a zero-trust perspective. But, again, this is not a technology where someone has to turn it into a rocket launch and be Elon Musk smart to do it.

 

In fact, it's the reverse. You get so much data (and we'll talk about logging in just a second) that you can actually get lost in the richness of it. And I see some people over-rotating, like a kid in a candy store.

 

And pretty soon, you know, you're diabetic and passing out, right? There's that much information. So there are things that we have to be careful about.

 

Don't dial it to 11 on day one just because you can, right? Start slow. Start the foundation.

 

And if you get the foundation right, it's very easy to build on it; if the foundation's not right, it's not. And, again, this is the first time where I think technology has kept up with the hype.

 

[Tom Tittermary]

Yeah. And we get into the conversation of needles and hay a lot of the time. And this data conversation is about, you know, the amount of data you have coming in around the problem.

 

And then we get into the conversation of turning data into actionable intelligence, right? I'm consistently at the point where more accurate data is valuable if I can process it correctly. Zero trust really changes the type of data that is meaningful to that actionable intelligence, right?

 

It's not just, hey, give me PCAPs, and I've got a petabyte of those; let me parse them to find this one interesting interaction. It's much more around, hey, what's the identity, what's the posture of the host?

 

What's the identity of the user? What's the geolocation of the user? From an attribute perspective, to calculate some type of risk, and then to basically make a contextual access decision based on, you know, that first set of things.

 

So what I would argue is, it's different hay, right? And there's less of it, though that's going to change, right? Because we're in that sea change right now, where the model is changing, and then people will add to it over time.

 

But I think it's interesting that mind shift in terms of what are the important pieces of data relative to this new model specifically?

 

[Hansang Bae]

Yeah, I remember when I was at Citigroup, a vendor asked, hey, what is it that you want? And I said, I want a hand that comes out of the monitor, grabs the operator's head, and says, come look at this one. This one's important, right?

 

Because there are thousands of alerts going off on the screen. And so everybody wants that data, everybody needs that data, but no one can really absorb it, use it, and act on it, right? And this is why, like, there's an IBM study that says it's on average 270 days before someone notices a compromise. 270 days, which gives these, well, bad actors the ability to slow down and hide in plain sight, right? Before, it used to be you'd land and it was like a bomb going off, and you'd have to be an idiot not to see it.

 

Now they try two machines, two logins, and then they shut down, right? They're just hiding in plain sight. So having that actionable intelligence is something that everybody wants.

 

It's kind of like, if I had a dollar for every time a customer said, I want to be more proactive than reactive, but I don't have time to be proactive because I'm fighting fires all day, every day. And again, the actionable data has always been there, but it was a tsunami of data, and you're looking for that handful of transactions, right? That's why it's hard, right?

 

You're looking for that handful of transactions in a tsunami of transactions. So again, I hate to already start throwing buzzwords out, but you have no chance on your own when you have a tsunami of data, so you need AI's help. And AI does that type of task incredibly well.

 

Like, this is not normal. You know, when we play which-one-of-these-things-doesn't-belong, or spot the difference between two pictures, as people, it's very hard for us to pick that out. But AI can.

 

So I think, again, with the data coupled with the training that you can give to AI, and with 3D printing, I'm almost at a point where I can get that hand to come out and say, come look, right? So anyway, it's kind of the intersection of my hobby and technology.

 

[Tom Tittermary]

So that's interesting. All the way back to the Zero Trust Reference Architecture V1, there's a box in there that basically sits between logs and decision-making, where there's an insinuation of AI.

 

And what's fascinating is, written three, four years ago, it was not super viable; now it's heading to be more viable, right? So I find this an interesting point where, yeah, more data is good, and AI can help in that exercise.

 

But I see, like, two main ways that people are going towards effective defense against bad actors. One is, how much data can I hand to a model that understands these environments, from an AI perspective, to go pull those needles out of, you know, a Walmart full of hay, right? So that's one angle.

 

The other angle is, what if I can change the game? The standard TTPs, everybody knows this: find the front doors, find a vulnerability for the front door, breach the front door, land on the network. And then, like you were saying, hit two, three boxes a day, try to find a soft host, then exploit that soft host and exfil data out of the environment, right?

 

So all of those exercises, everything I just said for every cyber operator out there right now, you're like, yeah, yeah, yeah. The notion around if I can disrupt the adversary's TTPs, to me, is the other main approach. One is find the needles in the hay and have better tools to do it.

 

We've done that, and the tools are getting better. The other, specifically, is if I could disrupt that first step in the adversary TTP chain by taking away those front doors. And this aligns with the DISA Zero Trust Reference Architecture 2.0: software-defined perimeter. We've talked about it before. I'm hoping and praying that you give me the magical phrase, the one I've said a million times, that you coined and authored around this topic. But the notion being, hey, how can we disrupt that adversary TTP by removing the notion of the front door?
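The "take away the front door" idea can be sketched in a few lines. This is a hedged illustration of the software-defined perimeter pattern in general, not any vendor's implementation; every name here is hypothetical, and a real SDP adds mutual authentication and policy checks at the broker.

```python
import socket

# Traditional model: the app binds a listening port. Anyone who can route
# to this address can probe it -- "if you're reachable, you're breachable."
def legacy_front_door(port: int) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))  # exposed to the whole network
    srv.listen()
    return srv

# SDP model: the app keeps no inbound listener at all. A connector dials
# OUT to a broker and waits; users reach the app only through the broker,
# which stitches two outbound connections together after policy checks.
def sdp_connector(broker_host: str, broker_port: int) -> socket.socket:
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect((broker_host, broker_port))  # outbound only; no open port
    return conn
```

The whole point is the second function: there is nothing for an adversary's scanner to find, because the application never listens for inbound traffic.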

 

[Hansang Bae]

Yeah, so, I mean, since you kind of teed it up, you know, I said it's right there on the sticker. If you're reachable, you're breachable. And that is the reality today, right?

 

So, you know, the front door, so what's the most obvious thing? Don't have a door, right? And now we have a technology that says, yeah, I don't have to have a door.

 

Now, I'm not saying it's 100%. I'm saying you've reduced your surface area so much that you've become a harder target.

 

You've bought yourself time, and you can get your house in order without trying to do a heart transplant while running a marathon, right? That's what it's like a lot of the time in infrastructure. At scale, DoD is one of the biggest networks in the world, if not the biggest, and it's hard to make changes, because the network is stable so long as you don't change it, right?

 

And that's why every outage is on a Monday: because change control happens on Saturdays and Sundays, right? And a simple typo in an IP address is enough to black-hole traffic. So, given that, trying to depend on the network, I get it.

 

I get it. I'm not saying network is not important. It's hugely important.

 

It's the only thing that ties everything together. But I always tell people it's like breathing. No one cares until you stop breathing.

 

And then you have about 90 seconds to fix it or you're dead. So, to all the network infrastructure people out there: I'm one of your people. It's hugely important.

 

But that's not the place to do security. It's the ubiquitous fabric. Let it do its thing.

 

And then make sure that the end devices and the applications are protected. Close the front door. Actually, don't even close it.

 

Make it disappear, right? Not just closed. Invisible.

 

Klingon, you know, Romulan shields, the cloaking device. Hey, Jamie, can we bring that up? Oh, no.

 

No, Jamie, here. So, again, the idea is technology can be simple. It doesn't have to be hard.

 

And the amount of data you get, you mentioned hay, needles in the hay. The way I think of it is, if done right, with a modicum of AI training, and I'm not talking about nation-state level. Just a modicum of training and AI.

 

You can have that MRI machine that'll suck out every needle in the haystack at scale, right? And then you'll be able to pinpoint. And the other thing I'll say, there's precedent here.

 

So, when you're a troubleshooter, 85% of what you troubleshoot is same-o, same-o. I mean, you can just go, eh, it's that problem. It's that problem.

 

And you can just turf it off. But that last 10% is not only interesting, it takes a lot of cycles to figure out as a troubleshooter. And so the idea was always, I wish I had a machine that could take the 80 to 90% of usual suspects off the plate of the operator, so they can concentrate on the 10% that matter.

 

And I think now, again, with the right telemetry coming from zero trust, because it's not NetFlow data or sFlow data. It's not packet data. It's not log data.

 

It's all of it, right? It's the entire life cycle. So, I think once you correlate that in a meaningful way, and you can get rid of the 80, 90% of just the usuals, then you can focus in on that needle.

 

[Tom Tittermary]

So, where we're going with this, right, is: hay model number one, what we've known forever, is let me inspect every packet on every network, every flow, every case, and find anomalies in that ocean of infrastructure-required underlay network traffic. And then let me make calls on whether I need to kick an individual asset or network device off of the larger network, right? The shift that we're seeing on the zero-trust side of the house is that there are fewer people kind of floating around the network, right?

 

And the meaningful traffic is the posture, the identity, all of that risk and attribute-based data that I can put into an identity system, and get out of the posture of the host, in order to identify risk, right? Because, since we've got the Romulan-Klingon shielding due to the SDP model, the fact that those people are on the network doesn't give them access to the assets. They can't even see them in a lot of cases.

 

There's no access. So the amount of hay changes, right? In the old model, I need to look at all of the data from the entire network infrastructure and make calls there, which is an ocean of hay to parse for needles.

 

There's a more meaningful set of data relative to this exercise: the posture of the host, the identity, possibly geolocation, any other attribute I want to put into the system of identity. So I want to get over to the AI side of the house, too, because it's interesting. I think the introduction of AI into these individual models helps, and makes analyzing the network traffic underneath more feasible and reasonable.
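The contextual, attribute-based decision described here can be sketched as a small policy function. This is a minimal illustration, not any product's policy engine; the attributes, weights, and threshold are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    identity_verified: bool   # e.g. MFA-backed user identity
    device_posture_ok: bool   # patched, managed, endpoint agent healthy
    geo_expected: bool        # request from an expected location
    hour_is_typical: bool     # activity during the user's normal hours

def risk_score(ctx: AccessContext) -> float:
    """Sum a weighted penalty for each failed attribute; 0.0 is clean."""
    weights = {
        "identity_verified": 0.5,
        "device_posture_ok": 0.3,
        "geo_expected": 0.15,
        "hour_is_typical": 0.05,
    }
    return sum(w for attr, w in weights.items() if not getattr(ctx, attr))

def decide(ctx: AccessContext, deny_at: float = 0.4) -> str:
    """Allow, demand a step-up (re-auth, isolation), or deny per request."""
    score = risk_score(ctx)
    if score == 0.0:
        return "allow"
    if score < deny_at:
        return "step-up"
    return "deny"
```

The design point matches the conversation: the decision is made per request from identity and posture attributes, not from where on the network the packet happened to originate.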

 

And then, if you think about it, if I take that same AI and I point it at that smaller set of more meaningful data, it could do more, faster, better anyway. But the interesting question to me is, okay, now I've got an AI in either one of those models, and we've kind of crossed the boundary from Oracle-style AI (not the company Oracle; I go to it like a Greek oracle, ask it a question, and it gives me an answer) to agentic AI, right? I'm saying, hey, I want you to go complete a task for me in a number of steps.

 

So now immediately comes the question: the AI finds an anomaly that it qualifies as a red risk. What is my allowance and acceptance for it acting on that thing, and when do I need that human in the loop? And I just think that's going to be a big question, especially over the next year, 18 months, two years, and specifically in DoD, too.

 

So, I just wanted to offer that up, maybe parse around that topic a little bit.

 

[Hansang Bae]

So, there are some real-world corollaries here. We have driverless cars already, and I remember, maybe it was like six years ago, I said, I know how software gets developed; I would never trust a driverless car. And a friend of mine, we were in Singapore at the time, said, Hansang, you remember back in the day when the car didn't have a seatbelt?

 

Like, yeah. When there was no crumple zone, yeah. No airbags, yeah.

 

No safety auto-braking, yeah. And of course, don't slam on your brakes when you're first starting out driving, because you'll lock them up, right? Because you won't stop, right?

 

And I was like, yeah. And he said, you didn't wait for the perfect car, right? Yeah.

 

And when you do the math, driverless cars will get rid of 90% of accidents. Do I think we can transition from IDS to IPS? That's always the problem we're talking about, right?

 

Agentic AI. Do I let it loose? And I think most of the folks who grew up in the IDS/IPS world never did the IPS part.

 

[Tom Tittermary]

Let me, just in case, for the audience... so, identity? I'm sorry, not identity. Intrusion detection, yeah.

 

Intrusion detection versus intrusion prevention, right? So is the tool acting to enforce, or is the tool just notifying a human to basically do the enforcement, right?

 

[Hansang Bae]

And everybody that had IPS was like, eh, just notify me, let me deal with it. I think we're at a point where you have machine learning and you have enough training data, and I'm not talking about just your data. Zscaler sees 500 trillion signals a day, right?

 

So we can help. When you see 500 trillion signals a day, you get to pick out, hey, this is a problem; 99.9999999%, it's the problem, right? So go ahead and do those things.

 

And that goes back to the 80% of stuff I can get off the plate, so that a human can intervene in a meaningful way and not be the guy pushing aside mountains of hay to find that needle. Now I have a little tiny mound of hay, and I can take my time to look through it. I think we're there.

 

And I'll say one thing, I fly a lot. I believe that air traffic controllers should be the very first thing we get rid of. Because, and this is why I'm saying this, it is a very rigid system.

 

What plane is on what taxiway, what runway, who's coming in, how fast they're coming in. This is something that computers can do in an instant, and humans suck at it: trying to keep this 3D model of planes in motion, landing. And with the recent, right?

 

So if you look, it's a very closed-loop system. There are only so many planes landing, only so many wind directions. And I'd be the first one to say, have the human there in case something happens.

 

But that's the 80% of stuff that we can get off the air traffic controller's radar, literal radar, and let the AI do it. And agentic AI, I think we're pretty close. I think this is, again, where it'll leapfrog. One other corollary I'll give you.

 

When voice over IP came about, it was pretty rough. In fact, Morgan Stanley made news back in the day because they said, okay, we're pulling back 50% of our VoIP deployment; it was a little too early, and they had traders to support. The dial tone test was a dial tone test for a reason. Now, no one even thinks about it.

 

And the reason VoIP became successful was these little cell phones. Initially, cell phone quality sucked, and people got used to terrible call quality. So when voice over IP stuttered a little bit, people were like, eh, what are you going to do, right?

 

Now, we all use either Google or Siri or the rest, where you say, hey, send a text to Tom, right? And you all have Apple phones. I'm not going to say it, but if I say it, I can make it send a text out.

 

So we're now primed. As operators, we're primed to give agentic AI a little bit more freedom, a little longer leash, to say, go do this; shut down this port if you see this, right? If Zscaler says it's 99.9% unusual behavior, shut it down, or, better yet, browser-isolate them. They can still do the work, but they can't do damage anymore. And I think we're almost there.
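The graduated, agentic response described here can be sketched as a simple confidence-to-action mapping: act automatically only at very high confidence, prefer a reversible containment like browser isolation over a hard block, and hand the ambiguous middle to a human. The thresholds and action names below are illustrative assumptions, not any vendor's defaults.

```python
def respond(confidence: float) -> str:
    """Map a model's anomaly confidence to a response tier."""
    if confidence >= 0.999:
        return "isolate"          # reversible: user keeps working, can't do damage
    if confidence >= 0.95:
        return "queue-for-human"  # the small mound of hay a person can review
    return "log-only"             # below threshold: just keep the telemetry
```

The design choice worth noting: the automated action at the top tier is containment, not destruction, which keeps the human in the loop for anything irreversible.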

 

[Tom Gianelos]

Sorry, I was just wondering: have you been in one of those driverless taxis yet?

 

[Hansang Bae]

No, I have not, no. But the mathematical brain in me says, statistically, it's safer, right? It's kind of that old adage that the most dangerous thing about flying is driving to the airport.

 

It used to be, maybe not anymore with the recent string of, but mathematically, cars versus flying, of course flying is safer. So in all those everyday scenarios, a driverless car will stop and avoid much faster than a human. We just have reaction time, so.

 

[Tom Tittermary]

Yeah, 1,000%. Not to get philosophical here, but it becomes an ego question, right? For me, when you talk about the human in the loop, it's like, well, I need a human in the loop. And the math behind that is: when the accident rate for driverless cars falls below the average accident rate for human-driven cars, then the vehicle is safer.

 

That doesn't feel right, because there's this natural acceptance you have to come to, that there's something out there that can do it better than humans. When you were talking about letting the air traffic controllers go, I immediately had a physical reaction to that. But it's that weird ego thing, and at the end of the day, it is math: when it's more effective mathematically on the other side, that's the answer. Quick sidebar here, but I swear I'm coming back.

 

We were on vacation with my family the other day, somewhere on the East Coast at the ocean, and my daughter was like, yeah, I hear there's a lot of sharks here. And I just ran the math. That was my exercise sitting on the beach, because I'm a massive nerd; this is what I do.

 

I ran the math on shark attacks and shark deaths in the United States relative to bovine attacks and deaths in the United States. Right, so now you get into, well, there's way more. Or pigs.

 

Cows are way more dangerous, and it's like, all right, well, look at the numbers. How many farmers are in contact with cows every day, versus how many people visit beaches every day? And the numbers just get crazier the other way.

 

So it's that notion of, we have this emotional human thing: I'm gonna fear the shark in the ocean. But a lot of that comes down to math. I think the math is gonna have to be better than better. It's never gonna be perfect, but it's gonna have to be so near perfect for us to get away from that whole human-in-the-loop exercise, for us to kind of acquiesce and concede over that way.
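The beach math Tom describes is a base-rate comparison: normalize deaths by exposures instead of comparing raw counts. The numbers below are round, purely illustrative placeholders, not real statistics; the point is the method.

```python
def deaths_per_million_exposures(deaths: float, exposures: float) -> float:
    """Raw counts mislead; risk per encounter is the comparable number."""
    return deaths / exposures * 1_000_000

# Illustrative placeholder counts only:
shark_risk = deaths_per_million_exposures(deaths=1, exposures=50_000_000)  # beach visits
cow_risk = deaths_per_million_exposures(deaths=20, exposures=2_000_000)    # farm contacts

# Even though far more people visit beaches, the per-exposure risk from
# cattle comes out orders of magnitude higher with these placeholders.
```

The same normalization is what makes the driverless-car comparison fair: accidents per mile driven, not total accidents.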

 

[Tom Gianelos]

I don't know if AI has the level of creativity, though. Take Google Maps, for instance, right? It's gonna take you on the shortest route it's calculated for any distance, right?

 

But it may not understand, as I learned yesterday on a trip up to Maryland, that some big storms rolled through and there were a lot of trees down on the road. And so I was stuck behind, you know, tree cutters on the route it took me, but mathematically, it said, no, this is the best route to take.

 

[Hansang Bae]

Well, the other thing is there are thousands of PhD papers, I imagine, on queuing theory, because when Google says, hey, I found a faster route, and then everybody swings over, now staying put makes more sense, right? It's like, back in the day, when people went grocery shopping with a cart, you'd go from this line to this line and you're like, ah, I'm stuck. So I think there is some of that, where too many people just blindly follow.

 

But when it comes to the creative part of this: as a troubleshooter doing packet analysis, I always tell people it's still about 70% art, 30% technology, right? As troubleshooters, we, the human brain, have an incredible ability to specialize in a very short amount of time. I'll give you some weird, off-the-wall examples.

 

I used to watch people play slots, and again, mathematically, it's the dumbest thing you can play, because you're just going to lose money, right? I saw people, and this is when actual quarters were used, reach into a bucket of quarters and, without looking, come out with five quarters, all lined up so they could go brrr, like speed loaders for your magazine, right? And they'd do that again and again, because they have that muscle memory.

 

If you take a lot of supplements and vitamins... what I found recently is that I can now, almost without fail, dump the bottle and there'll be exactly seven pills there, right? And every time I'm like, huh, how did I do that? So we are very good at learning these big patterns and unusual things that AI would just... like Grok, smarter than PhDs, okay, I got it, until someone said, hey, what's your surname? And then it freaked out. Vapor lock.

 

So yes, AI can't do those creative, quick left turns and right turns in thinking. But for these closed-loop system attacks, well-known attacks, signatures of attacks, how they behave, unusual behavior, it's a closed-loop system like air traffic control, and AI is infinitely faster than humans.

 

[Tom Tittermary]

So, fascinating discussion. I'm gonna get back to zero trust, but I'm gonna offer one more piece, because you got my brain rolling here. Think about game theory, the niche of adversarial, conflict-related, mechanically oriented game theory creativity. Have you guys watched AlphaGo, or seen the documentary around AlphaGo? AI from a few years ago, Google's AlphaGo, was generating new moves that nobody had ever seen, to creatively defeat an adversary in a known mechanical context, moves people hadn't seen in, how long has the game been played, 3,000 years?

 

But people couldn't understand what the AI was doing, because in that adversarial, game-theory-style, competitive way, it was creating new moves. It had the opportunity to play internally more games of Go than had been played in human history, to create these new moves and figure out what would work and what wouldn't. Only, and I'm gonna bring it back here, only to go to my point of: hey, that's a fixed game where the rules are in place. I align that directly to the adversary's TTPs.

 

If you can change the game, if you can change the TTPs and change the mechanics of the exercise, that's an effective move, right? We talk about defensive AI in many of these cases, but there's offensive AI too. I always tell my kids, I'm on the good side of the cyber battle, but there are just as many super-smart people on the bad side, and you're gonna see the same thing from an AI perspective. So just the notion of, hey, one way to think about this: if I can change the game, then even an AI has to relearn the models. And by the way, if I can change the game without explaining the new rules of the game to the adversary, win-win, right?

 

[Hansang Bae]

Yeah, I think this is why AI poisoning is now a thing, right? You're trying to attack the defensive system by attacking the, I hate to say it, the Skynet, right? If you attack the brain and you poison the machine learning... this is the thing about AI: it doesn't know right from wrong, it just knows left or right.

 

Everything is left or right. And therein lies the problem. In a closed, rigid system, it excels. The minute it's off the beaten path, it freaks out, because there's just no training model for it, right?

 

I'll give you a perfect example of this. I was in Korea in March, and I was using the Korean food ordering system, written in Korean. I grew up there, so I had no problem.

 

Reading through the reviews, there was one review written in Korean, and the actual word was gut, as in "good," okay? So me, bilingual, I read that as good: it's a restaurant review. Someone wrote "good" in Korean, spelled out phonetically as gut.

 

But the English AI translation said exorcism. Right there, it said gut: exorcism. And I was like, exorcism in a restaurant review?

 

What the hell is this? And it said, see it in the original Korean. I clicked on it.

 

And then I saw that gut, and immediately I laughed. Because that vowel doesn't exist in Korean, "good" gets written phonetically as gut, and gut is also a homonym for exorcism. I knew that. A seventh grader in Korea would know that, but this ginormous brain of AI had no idea.

 

It just took the homonym and translated it as exorcism. Those types of things, AI sucks at. Nuance is where AI will never catch up to the human brain.

 

Right, we'll never have a nuanced AI. But cyber attacks are not nuanced. The techniques can be nuanced, but if I see enough of them, I can train for them.

 

So it's this constant battle of: have you seen it? Because once I see it, I can train for it. So the game will forever be about that nuanced attack.

 

[Tom Tittermary]

Yeah, so to bring this all the way back, man, we just, that was a super fun wide area tangent that went far afield, but I think we anchored it around Zero Trust in a lot of the ways.

 

[Tom Gianelos]

Absolutely.

 

[Tom Tittermary]

To bring it all the way back, right, to go back to the original topic: Zero Trust and DoD, how this is different, how it feels different, because it's kind of this new model plus a compliance exercise. Since we're talking to an audience, because there's a camera, that is specifically focused on DoD, we have to talk DDIL, denied, disrupted, intermittent, and limited, we have to talk air gap, we have to talk about these types of scenarios. I've been involved in a lot of these conversations, and you've been involved in a lot of these conversations. How does this conversation pivot for those environments versus what people are typically used to?

 

Like, what's novel about that conversation there?

 

[Hansang Bae]

So I think there's a couple of different things. Number one, if you're talking about austere environment, you don't have access to gigantic pipes, you don't have access to AI models, right? So we still have to know how to work within these austere environments, whether it's in a backpack with limited compute.

 

So, you know, one of the things in the infantry, if, and I'll tell you again, like, quick side story. So we're out there training, and a buddy of mine brought this very high-speed, like, little kit to burn, you know, like a heater, like a Bunsen burner, but it collapsed, and it's, and when I saw that, the first thing I asked wasn't how much was it, where did you get it? It was, how much does that weigh?

 

Because in infantry, everything goes on your back, right? Unless you're mechanized. So I was like, oh, that's, how much does that weigh?

 

He's like, oh, that's super light. That's all I cared about, was super light. Another friend of mine walked by, and as soon as he saw it, he's like, hey, what is that?

 

Like, oh, how much does that weigh? So, you know, the whole idea here is that when it comes to cybersecurity, when it comes to, you know, the institutional knowledge and detail in an austere environment, we have to know how to do the basics. You can't say, I'll do a rocket launch for you, but I have no idea how I'm gonna get the spaceship from the hangar to the launch site.

 

So if no one's working on that, I don't care if you can do a gravity assist around the moon and the sun and Jupiter to go out to the Kuiper Belt. It doesn't matter, because you haven't done the basics. So the DDIL piece is: how do we fundamentally shrink this down to work in an austere environment?

 

Now, there is the other side of this coin, where every conversation sometimes goes to: okay, 11 EMPs go off, everything's gone. Can I still get to the Zscaler cloud?

 

And like, I hope you have a bigger problem to solve than that, right? So sometimes it over-rotates, like people tend to do; we always go to the most extreme, because again, we're very left-or-right extreme, not politically, just in thinking. So the question is, can I bring the basics and have an MVP, a minimum viable product, in a DDIL environment?

 

The answer is yes. You just need a stand-in Active Directory or, you know, some identity provider, right? One of the things about Zscaler is that we're like water.

 

I'll find a way out. If you give me a string of identity providers, I'll try all of them, okay? And eventually I'll get to one like, oh, okay, I'll let you in.

 

Or you can say fail open. This is so important that even if I don't meet all the criteria, fail open, just let the user do it. Maybe let the approved users do it, right?
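
The identity-provider chain and fail-open behavior described here can be sketched in a few lines. Everything below is a hypothetical illustration, not a real Zscaler API; `authenticate` and the IdP callables are invented names.

```python
# Hypothetical sketch of trying a "string of identity providers" with an
# optional fail-open policy. Nothing here is a real Zscaler API; the names
# are invented for illustration.

def authenticate(user, idps, fail_open=False, approved=frozenset()):
    """Try each identity provider in order; the first success wins."""
    for idp in idps:
        try:
            if idp(user):           # each idp is a callable returning True/False
                return True
        except ConnectionError:     # IdP unreachable in an austere environment,
            continue                # so move on and try the next one
    # No reachable IdP accepted the user: apply the fail-open policy,
    # optionally restricted to a pre-approved user list.
    return fail_open and (not approved or user in approved)

# Usage: an unreachable IdP followed by a local stand-in that knows "alice".
def down(_user): raise ConnectionError("IdP unreachable")
def local(user): return user == "alice"

assert authenticate("alice", [down, local]) is True
assert authenticate("bob", [down, local]) is False
assert authenticate("bob", [down], fail_open=True, approved={"bob"}) is True
```

The design choice mirrors the "we're like water" point: exhaust the chain first, and only then consult the fail-open policy.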

 

So there's fine-grained control that you can apply very, very quickly. So the austere environments, the DDILs of the world, yes, it has to work. But again, it's not magic.

 

You have to have the ecosystem and the basic foundations all there for it to work. It can be in a smaller scale, but that's, you know, so before you go on a mission, you know the kit that you're going to carry out there. Everybody has a basic load.

 

So everybody knows, right? So as long as everybody understands that the entirety of DoD's address book is not available, then it works just fine, right? So it's managing expectations and understanding the mission environment and how austere it is.

 

[Tom Tittermary]

Yeah, one of the more interesting exercises I've gotten to go through in my time at Zscaler is when we were building DDIL support with our product management and engineering out in California. The DDIL capability is available for, I hate to talk products on here, but our main zero trust product, Zscaler Private Access, ZPA, specifically. So when we went to go have that conversation, it became a really interesting scenario, because it's like, hey, Tom, we're a cloud service.

 

It's like, I know, but you need it to work when you can't get to the cloud, I know. So the question became, and we didn't specifically talk about weight relative to infantry, but we absolutely had the conversation about: is it more important that I have every feature and capability and widget that's in the cloud at the edge? What it immediately became was a requirements exercise. I talked to military personnel globally in many of these different scenarios to qualify the requirements, to make sure every individual use case they could come up with would be supported by what we came up with.

 

And then the reverse engineering exercise became, how do I do that with the lowest CPU memory power requirement possible at the edge, specifically around that individual weight component to increase the amount of agility that we get there?

 

[Hansang Bae]

And sometimes good enough is good enough, right? I've got bigger things to worry about. If I'm in a DDIL environment, yeah, of course cyber's important, but the threat isn't as much cyber as it is imminent danger to your life.

 

So again, the context, the frame of how we rate what's important needs to shift as well, right? And I think on the civilian side, that's harder to do because you haven't been downrange, so you don't understand what that austere environment may look like or feel like. So I think having institutional knowledge on the team to kind of convey that into civilian speak helps, right?

 

Because like you said to an engineer, like we're a cloud-first company, you're asking me to do what now? It's very counterintuitive. So when you explain why, as opposed to do this, I find that people go, oh, okay, I see what you're trying to do, let me help.

 

I'll figure out a creative way of doing that, and they did. So the DDIL solution is there, ready to go.

 

[Tom Tittermary]

Yeah, it's interesting to me. I manage engineers at Zscaler, and one of the most important things is the baseline, right? The minimum requirement is to have the intelligence and the subject matter expertise to know the right answer in an individual category. That's the science, right?

 

And then the art of it, the part that has meaningful impact in real environments, is figuring out the package, the wrapper: taking that absolutely incontrovertible one-plus-one-equals-two data and using it to effect change with other humans, to get that story accepted. I'll give an example. A couple companies ago, I was in California with a different company, having a conversation with product management. We had a product that could do a three-pass crypto-shred overwrite on something, right?

 

And I was letting them know, per NISPOM, I'm going back to the NISPOM documents at the time, I needed a seven-pass overwrite. And they were like, Tom, you're crazy, we don't need that. So here's the story I came up with that actually ended up working.

 

Anybody remember the movie Argo? Okay, so when the embassy was stormed, they started incinerating all the documents. And I said, remember the incinerator, that's seven pass overwrite.

 

So I said, remember in the movie the incinerators broke? And they said, yeah. I go, what do they do next?

 

They go, the shredders. I go, three pass overwrite. And I said, how did that end up being meaningful later in the movie?

 

And it's like, well, they got a bunch of children in the area to put all of the shreds back together. And so all of a sudden it landed. I had kept banging my head against the wall with "I just need seven because." But the story made it stick; now you can spread that understanding across individuals.

 

I just find that comes up so often, and you do it so well. Like "if you're reachable, you're breachable", the metaphors I see you use in these individual customer conversations.

 

The packaging of that underlying, scientifically true, one-plus-one-equals-two data in a way that bridges the gap just makes a huge difference for me.

 

[Hansang Bae]

Yeah, I think it's hard to sometimes have people see, I use analogies because people can relate to it and they understand it, right? So I remember reading something where they said, the reason why we can remember stories we read as a seven-year-old, but I can't remember this PowerPoint deck from yesterday is because we all remember stories. And I think this is still on YouTube.

 

Everybody can look this up. National, Nat Geo, National Geographic when cable was a thing, had a show called Brain, Brain. Brain Games?

 

Brain Games.

 

[Tom Tittermary]

Okay.

 

[Hansang Bae]

Okay.

 

[Tom Tittermary]

I remember Brain Games.

 

[Hansang Bae]

So it was fascinating, like how there's a subconscious brain. And one of the things was: here's a list of 10 things, remember it. My kids and I all tried it, and I think we got like three or four items.

 

And then they said, okay, your brain sucks at memorizing random items. So how about this? And I still remember the story.

 

There's a guy who works hard. He packed his lunch at breakfast, right? Put it in the lunchbox, went to the subway, came out of the subway, saw that it was raining, took out his umbrella and opened it. And as he was crossing the street, he saw a friend who was wearing brown shoes and a hat, and said hello.

 

And those were the words on the list, hat, the lunchbox, umbrella. I watched this like seven, eight, nine years ago. I still remember it because it's a story.

 

And the way the neurologist explained it was: when I give you a list, one core, literally one core of your brain, tries to memorize it. If I tell you a story, multiple parts of your brain chime in: I remember what it feels like to walk in the rain, I remember the smell of the subway, the sound of the subway.

 

So all these different parts of the brain get involved, and it cements it into your memory, right? So analogies work. And in technology it's hard enough; it's even worse in DoD, because there are more acronyms in DoD than there are in technology. You put the two together, and it's almost impossible.

 

So how do you make executives or people who don't do this for a living understand the importance of something? So when you have something like, if you're reachable, you're breachable, people immediately go, oh, I get it. Like, I know what that's like.

 

So sometimes, you know, one of the things that I hate about PowerPoint is that people forget the PowerPoint isn't for you, it's for the audience. Too many people use PowerPoint as a reminder for themselves as the speaker: oh, I've got to talk about that point, and that one. And so it's very dense.

 

Those are good leave-behinds, but one of the things that, again, I tell people is that if you throw up words and numbers on a PowerPoint, people read, and if they read, they can't listen to you. You think they're listening, they think they can do both. Human brains cannot multitask, proven beyond a shadow of a doubt.

 

So if you throw up a lot of words, don't speak because they're just reading, you're wasting your breath. But if you put up one or two key words, they're all thinking, what is that? And they'll pay attention to you, right?

 

So I don't know how we got on this PowerPoint thing, but again, using analogies, especially in a highly technical field, helps because it's a frame of reference that people understand.

 

[Tom Tittermary]

Yeah, little visual things, and I find humor and or, if I could attach it to some level of emotion. So we created our big ServiceNow integration, and we called it Master Blaster. And I think anybody that's seen any of those Mad Max movies remembers Master Blaster, it's the small gentleman sitting on top of the large gentleman, and you say, oh, well, the small gentleman's ServiceNow because they're the brains, and Zscaler's the big gentleman because the brain tells Zscaler who to go hit with a bat from an enforcement perspective.

 

It sticks. It's silly, and you feel weird talking about it in a technical scenario, but there's not a person that I've kind of explained that out to, and they're like, oh yeah, Master Blaster. And then their brain starts going, oh yeah, and the integration here, and then there's the amount of technical detail that's involved and immediately referenced.

 

It's like how a smell corresponds to a memory, right? The visual kind of cues up the technical interplay underneath.

 

[Hansang Bae]

That's right.

 

[Tom Tittermary]

Okay, again, guys, we're all over the tangents today. Hopefully this is entertaining for everybody out there. So one of the interesting talk tracks, we've gone back and forth, and we both kind of lament this, and we're gonna sneak back into the AI category, is anybody that's heard me speak over the last 20 years, and they say, hey, Tom, what's the hardest thing about cybersecurity?

 

And I say the users, right? Because the users are out trying to do good things, and in doing good things, they will find ways that you, as an IT administrator, did not expect to accomplish those good things, which sometimes come across as security events, right? But I will always say the hardest part is the users.

 

We always talk about these irresponsible users clicking on these crazy clicks and links, and I talk about constantly how the best product we had at my time at Symantec was Norton Antivirus, because, you know, grandmom will click on anything, and to me, that's threat intelligence from a corporate perspective from Symantec. But just on that topic a little bit, this notion of AI is getting good enough on the adversary side that the term irresponsible user almost isn't even fair anymore.

 

[Hansang Bae]

Yeah, yeah. I think this is, again, where technology leapfrogs. And I kind of chuckled when you talked about grandma's computer and Norton Antivirus, because that same grandma might not have gotten a virus, but I'm sure she had like eight toolbars, where half the screen is Ask Jeeves, Bing, you know, Ask This, Google, you know, Yahoo, you know, some other, like, you know.

 

We all remember when toolbars were a thing, right? And because they didn't know any better, they'd click okay, and then half the screen is just eight different toolbars, because the game had changed. It's not about malware, it's about getting eyeballs, getting that toolbar installed, and they didn't know any better.

 

So in terms of irresponsible user, this is that, if you're a grandma, we apologize, but it's a good analogy. There used to be a case where, why did you click on that? Why did you run that?

 

We told you not to run that. Now, the fakery, the imitation, is so good that none of us is safe; all of us here, with 40, 50, 60, 100 years of experience collectively, would click on it, right? There was one Wells Fargo attack where I got texted, called, and emailed, and it almost fooled me, because it was so well-coordinated.

 

And so the idea that there are irresponsible users, they're not, they're unintentional. So you're not dealing with a malicious insider threat. Those, you have, again, techniques to catch them, and hopefully you all can.

 

But now, every user is an unintended malicious user. They don't mean to do you harm; it's just that they got tricked, because it's so good, right? I remember there was an Entra ID login page that was so good that when our ThreatLabz people showed it to me, I was like, yeah, of course that's legitimate.

 

It wasn't. Behind the scenes, there were like 13 different things that were different. But how would you know?

 

It's a visual thing; how would you know? It's like what my wife always says when I do something: how are normal people supposed to do this, right?

 

And that's the dilemma, is they're all normal people, they don't mean you harm, but anybody can be phished, because it's gotten so sophisticated.

 

[Tom Tittermary]

Yeah, so part of this goes back to what we were talking about earlier. If attacks are getting that sophisticated, then the giant gap that used to exist between us as security professionals and the average Joe Schmo on the street, in terms of the ability to parse this stuff out, is shrinking, right? So if I think about the two models we were talking about, where there's a Walmart full of hay and I have AI parse out the needles, that's the network piece, and that's way later in the kill chain, right?

 

By then, the host has gotten pwned, the user's identity has been used to land on the network, and they're moving laterally around the network; that's the catch mechanism over there. There's an argument that with these new AI models, it's critical that you catch these individual aspects sooner, either on the way to the device, or by interacting with the host posture system to be able to tell that the device is pwned. That way, if we're moving into this real zero trust model, I can catch it way earlier and keep those assets obfuscated from the device that's gone bad, versus figuring it out on the network, once they already have unobfuscated access to these things, and trying to kick them off at that point.

 

[Hansang Bae]

Yeah, or at least slow them down, right? You can carpet bomb them with honeypots, so that whatever the malware or the attack is sees this juicy, target-rich environment, but it's all fake, and you can do that on demand. Those are some of the techniques. And the job isn't to be perfect; you'll never be 100% successful.

 

Just statistically speaking... actually, I'll tell you a story. Back in the day, I won't say where or who, but they were doing a change control, and a system administrator missed one patch: out of 200 servers for this application, one was skipped. Everyone who's done change control at 2 a.m. knows it's very easy to go, oh, okay, I'll come back to it, and by 3 a.m. you're so tired, you forget. So one server was missed, and the head of security told me, Hansang, can you believe it?

 

Like, these, you know, he cursed, but these bad actors, not the word he used, found the one server out of 200. What rotten luck is that? And I said, hey, you're doing the wrong math.

 

It's 30 million versus 200. Of course they're gonna find it, right? Because these are all bots attacking you.

 

So as humans, we have a hard time picturing these large numbers.
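
The "wrong math" point is easy to make concrete. A rough sketch, with illustrative probe counts: if each automated probe lands on one of 200 servers at random, the single unpatched server gets found almost surely.

```python
# Probability that at least one of N random probes hits the single
# unpatched server out of 200. The probe counts below are illustrative.

def p_found(probes, servers=200):
    return 1 - (1 - 1 / servers) ** probes

print(p_found(1_000))        # ~0.993 -- a thousand probes almost surely find it
print(p_found(30_000_000))   # so close to 1 that it prints as 1.0
```

Which is the point: against 30 million bots, finding the one server isn't rotten luck, it's near certainty.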

 

[Tom Tittermary]

Yeah, it's the 10,000 monkeys with 10,000 typewriters in 10,000 years will come up with the works of William Shakespeare.

 

[Hansang Bae]

And so people are like, well, how can that possibly be? Well, you know, I forget, was it Milgram? I think it was Milgram, back in the '60s, right, who did an experiment.

 

Everybody knows the Milgram experiment, the one where, hey, administer the electric shock, and people did it. Not that experiment. He asked: from Kansas City, how many postcard forwards would it take to reach this very specific banker in Boston? And it took seven hops.

 

That's where the whole degrees-of-separation thing, the Kevin Bacon game, comes from. And people were flabbergasted.

 

How can a random person in Kansas, with just seven hops, get to a specific banker in Boston? We can't do that math; we think there's no way. Except, how many people do I know that I could reach out to right now?

 

Probably 100, conservatively. And how many do those 100 people know? Another 100.

 

So you just went from two orders of magnitude to four orders of magnitude, and by seven hops it's like half the world, right? And those are the things that we as humans suck at. And again, going back to agentic AI: AI is not impressed by large numbers, nor does it get confused.
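
The hundred-people-who-know-a-hundred-people math can be sketched directly; this is the naive count, ignoring any overlap between social circles.

```python
# Naive small-world reach: a fan-out of 100 contacts per hop, with no
# overlap between anyone's contacts. Reach grows by two orders of
# magnitude per hop.

for hops in range(1, 6):
    print(hops, f"{100 ** hops:,}")
# The naive count exceeds the world's population within a handful of hops,
# which is why short chains can reach almost anyone.
```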

 

And that's the secret of AI, well, machine learning, and why it can do its job: it doesn't get confounded by, what do you mean it's an order of magnitude bigger, two orders of magnitude bigger, three orders of magnitude bigger. And, you know, again, there's a precedent for this.

 

So back in the day as an engineer, you'd do order-of-magnitude analysis. Thinking intuitively, no tables or calculations, you figure, okay, I think it's going to be about 1,000 newtons. And if your answer comes out to a million newtons, you know you screwed up, okay?

 

And so now we're doing that a billion times a second. So we can be very confident in that order of magnitude analysis. The other one is unit analysis.

 

If you're looking for kilopascals and your answer comes out in moles, millimoles, you screwed up somewhere, right? It takes us about 30 seconds to a minute to do that check. If you're doing machine learning with 500 trillion signals, you can do it a billion times a second.
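
The two sanity checks described here, order-of-magnitude analysis and unit analysis, can be sketched as simple predicates; the numbers and units below are the ones from the conversation, wrapped in invented helper names.

```python
# Order-of-magnitude check: is the computed answer within roughly one
# power of ten of the intuitive estimate?
import math

def order_of_magnitude_ok(answer, intuition, tolerance=1.0):
    return abs(math.log10(answer) - math.log10(intuition)) <= tolerance

assert order_of_magnitude_ok(1_200, 1_000)          # expected ~1,000 N: fine
assert not order_of_magnitude_ok(1_000_000, 1_000)  # three orders off: screwed up

# Unit check: did the calculation produce the unit we were looking for?
def unit_ok(result_unit, expected_unit):
    return result_unit == expected_unit

assert not unit_ok("mmol", "kPa")  # looking for kilopascals, got millimoles
```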

 

So again, what I'm looking for from an AI assist, that's a good name, AI assist, is 80% get it off my plate. Usual, same old, same old. Again, highly structured, closed loop environment, I trust you all day, every day.

 

Everything else, punt to me. Until I see enough of that, and that falls into that highly contained, restricted environment that machine learning can just regurgitate and make its own and train itself up. And that's what I want, AI.

 

And I think agentic AI is very, very close to being a reality.

 

[Tom Tittermary]

It's an interesting thing, right? It reminds me of what my wife says about me all the time: I have a lot of specialized knowledge but very, very little common sense, and she's right. It's not unlike AI, right?

 

And here's the thing: the model we've looked at for commercially available AI is strict human-AI interaction that sometimes totally drops the ball on common sense. I think, again, no super young people at the table, we all remember a time when every company had warehouses full of people doing phone customer service. Is there a model for AI to say, hey, I need an extra second, this is too dumb for me?

 

Let me toss it to a human. Like, give me 30 seconds, maybe there's service level. I know all the super smart stuff.

 

Is there a business model for Grok to say, surname? Hang on, give me a second. Hey Bob, what's this person's surname? And then complete that cycle. Part of me just wonders, right?

 

I don't know if AI figures out the human condition and the nuance in the human condition sometime soon. Maybe there's a way to shortcut that from a business model.

 

[Hansang Bae]

Yeah, I think that, actually, that's a great idea. That hand that comes out and says, come help me, or look at this, I think that would short-circuit it. And that's the whole idea of backpropagation in machine learning: I found something, so I go back to all the compute nodes behind me and say, hey, I found the answer, do this next time, right?

 

So that's one of the strategies. So you short-circuit that with: I give up, I don't know what this is, and a human jumps in. And believe it or not, again, random facts, I'm like you, I just know a lot of random facts, and one day I'll be on Jeopardy. Who are two people who've never been in my kitchen? If you get that reference, you can comment.

 

So, the postal service, I always wondered about this. I used to think, how good is the OCR system of the postal service, that I can scribble and it still gets there? I always imagined they must have this NSA-level OCR engine that's some super secret piece of intellectual property.

 

It's not. If the regular old OCR can't read it, which I imagine is quite a bit of mail, they kick it out and send it to a station in Omaha. See how the whole thing's coming together?

 

We're talking about Omaha. And in Omaha they have a contact center, they used to have a lot more, where they have a specialized keyboard, because speed is key here, and they get a picture of the envelope. They don't physically mail the envelope there; they just send the picture, and a human looks at it and goes, oh, that's Eaton Street in San Jose, from the context, right?

 

So you type that in, and the next time that person writes a letter, guess what, the machine has a match, right? That's the old-school, analog way of saying, hey, I can't do this, give me a hand. If we could do that with AI, and this is kind of what I was talking about, if 80% of it can be taken away, then for the 10 to 20% I'm concentrating on that requires nuance, I can tell the AI, next time, don't bother me.

 

[Tom Tittermary]

I just realized we totally inverted the human-in-the-loop model: the human in the loop isn't there to check the AI's finished work, the human in the loop is there because the AI is stumped.

 

[Hansang Bae]

Yeah, that's true. Well, not as good at certain tasks. Correct, right.

 

But I just did this last night, I don't know if everybody's seen it. It turns out, for about 55% of the population, which is more than the majority, if the first letter and the last letter are correct and there's some semblance of the word, you can read it almost at speed, right?
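
The effect is easy to reproduce: shuffle only the interior letters of each word, keeping the first and last letters in place. A quick sketch, not tied to any particular study.

```python
# Scramble the middle of each word, keeping the first and last letters fixed.
import random

def scramble(word):
    if len(word) <= 3:
        return word                 # nothing worth shuffling
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

text = "reading scrambled sentences is surprisingly intuitive"
print(" ".join(scramble(w) for w in text.split()))
```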

 

[Tom Tittermary]

I almost bought a T-shirt the other day that was exactly what you're talking about, and it's weird, because you'll look at it critically later and not understand how your brain got there, but it's surprisingly intuitive.

 

[Hansang Bae]

Yeah, and so, again, when I was learning about how the brain works: to protect our brains, we have a subconscious filter that says, all of this stuff, you don't have to pay attention to; pay attention to this, right? And another fascinating Brain Games story is why motorcyclists get sideswiped so often at night. We're so used to seeing the two headlights of a car that if a single headlight is coming up in the mirror, and you look and there's a single headlight, the brain says, eh, something went wrong, ignore it.

 

And people literally can't see that light, because the subconscious brain filters it before the conscious brain gets to calculate. Otherwise we'd have analysis paralysis, because there's too much feedback in the real world. That's the problem we're trying to reverse: AI is so good at doing the billionth-of-a-second analysis, but one little loop and it locks up, where we're very good at that.

 

So it's a combination of two.

 

[Tom Tittermary]

People aren't gonna believe this is a 100% true statement: Tom and I were having a very similar conversation in a food hall in Omaha, sitting over a table. So, I meditate all the time, and I think about what I think quite a bit.

 

We were having a conversation about how, if you think about it, your conscious experience is a series of physical sensors, sight, sound, all of those things, right? All those sensors are providing ones-and-zeros data to your brain, which is basically the processing unit, right? And the kicker is there's firmware in that brain; some of it is learned experience over time, and some of it is evolutionary, caveman, big-bang, startle-shock type stuff, but it's a filter.

 

It's absolutely a filter. So there's this gap between actual reality out there and your perception of it. And what gets really interesting is that's what's missing in AI in a lot of these cases, right?

 

The baseline human experience firmware around that. What's the, not BrainGate, that was the original model, but what's the other Elon Musk company where they're trying to, Neuralink?

 

[Hansang Bae]

Yeah, Neuralink.

 

[Tom Tittermary]

Maybe there's something there. Maybe we start a work share program with the humans and the AI where we put in the individual firmware memory components.

 

[Hansang Bae]

Yeah, I mean, this is like the postal model: where the OCR can't do it, humans exist, and the AI just punts, gives up, right, instead of trying to figure it out. Because back in the days when AI was a nascent thing, what was the one example, right? Here's an apple.

 

Okay, I can identify every apple until I take a bite. And then the AI is like, I don't know what that is, right? Whereas a two-year-old would know.

 

So we're way beyond that, I get it. But still, there are these seemingly trivial things that AI trips over and people lose all confidence. But in cyber, again, it's a closed-loop system with very rigid rules, and AI can do that a billion times faster than we can, yeah.

 

[Tom Tittermary]

Yeah, it's more AlphaGo than painting a picture that a human would find beautiful, right? There are fixed rules to the game. It's making sure that the defensive AI understands the rules, and then you jam up the rules for the offensive player.

 

[Hansang Bae]

Although now, you know, what's interesting is that they have done studies where even infants know pretty from ugly when it comes to faces, because there's basically a mathematical formula for the symmetry. Yeah, it's the rule of thirds, where your eye sockets are. And it's universal; it's not a cultural thing.

 

Universally, people agree on prettier and uglier, right? And universally, again, I'm like the Brain Games cheerleader here, but the other thing was, they showed people two pictures: who's the most trustworthy? Almost without fail, people pick the same, more trustworthy face, just from visual cues.

 

Because we have this institutional knowledge that we've built up all our lives. We do this all the time. A couple more examples. Woodworking, I love woodworking, right?

 

Your fingertip is capable of feeling a difference of a thousandth of an inch. So when you're lining up a part, you can use the square, you can use the combination square and all of that, or you can just use your fingertip, because it's accurate to a thousandth, okay? That's how sensitive it is.

 

So, pro tip: when they prick you to draw blood, don't do it on the pad where all those nerves are; do it to the side, because otherwise it's annoying. But that's machine learning on our part. You know, again, when you pick out quarters by feel for the slot machine, that's machine learning at work.

 

And so our ability to pick up on these nuances is a lifetime of experience. The other thing, and you can test this, is that human hearing can differentiate down to one degree of difference. In a 360-degree arc, if you move the sound source by one degree, the human ear can detect it and go, oh, it moved, right?

 

It seems impossible, but test after test shows it. Until, and you can do this at home, you put some clay in your ear. Then you have no idea where the sound's coming from, because these little folds are unique to everybody.

 

We machine-learned it as kids, hearing a sound and turning toward it, and we got better and better, and we've done so many iterations of that. Think about sound. All day, every day, we triangulate, right?

 

So just go home, get some Play-Doh, put it in the nooks and crannies of your ear, and then have somebody make a sound, and you'll be like, ah, over there? So we do this. Machine learning is a thing.

 

We do this with our senses. We're just slow at it and it takes us like five years because little kids can't really triangulate. Machines, again, do it in a second.

 

So how could we ever compete against something that can do it in a billionth of a second for those routine tasks, right? Again, if it's a closed loop and rigid system, they excel. Everything else, they suck.

 

[Tom Tittermary]

Yeah, one other thing, guys. When we have these conversations, we talk beforehand about what we're going to talk about. The other topic I was thinking of bringing up was IP addresses and their long-term relevancy in this Zero Trust model going forward. I know you had some thoughts there.

 

I'd love to dig in on that a little bit.

 

[Hansang Bae]

Yeah, I think, again, as a network engineer, you see the world in IP addresses, right? And the truth is we don't even look at the source. The whole point of routing is you look at the destination and you send it there.

 

And there have been attempts to do reverse lookups on the source. Every time you do that, it slows things down considerably, with the exception of multicast. So for those of you who are well-versed in multicast, yes, there is the ability to check where it came from, because the source is important there.
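The point about routing can be sketched in a few lines. This is a minimal, illustrative toy, not any real router's code: a forwarding table is a longest-prefix match keyed entirely on the destination address, and the source address never enters the lookup. All prefixes and interface names here are made up.

```python
import ipaddress

# Toy forwarding table (FIB): prefix -> outgoing interface. Values are illustrative.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "uplink",  # default route
}

def forward(dst: str) -> str:
    """Longest-prefix match on the destination only; the source never enters the lookup."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    return FIB[best]

print(forward("10.1.2.3"))  # the /16 beats the /8
print(forward("8.8.8.8"))   # falls through to the default route
```

Notice that `forward()` takes only a destination: that is the whole design of IP forwarding, and it is why the source address carries so little trustworthy information.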

 

Other than that, no one looks at the source. So it's a very limited set of information. It's now meaningless because what would you rather have?

 

An IP address that may or may not have been NAT-ed 10,000 times, and which may or may not be correct because the source can be spoofed, or would you rather have the identity of the user? No question, the identity of the user. It's not up for debate.

 

It's rigid and it's factual and it's a closed loop. I know you're Tom T. And I know what application you're using, not by port number but by URL, the entire URI down to the last byte, and the entirety of that transaction.
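The contrast being drawn here can be sketched as two policy functions. This is a hypothetical illustration, not any vendor's API; every name, user, and URL in it is invented for the example.

```python
# Hypothetical policy sketch; all names and URLs are illustrative.

def ip_policy(src_ip: str, allowlist: set) -> bool:
    # Low fidelity: src_ip may have been NAT-ed many times and can be spoofed.
    return src_ip in allowlist

def identity_policy(user: str, url: str, rules: dict) -> bool:
    # Higher fidelity: decide on WHO the authenticated user is and WHAT full
    # URL they are requesting, not on where the packet claims to come from.
    return url in rules.get(user, set())

rules = {"tom.t": {"https://apps.example.mil/payroll/view"}}
print(identity_policy("tom.t", "https://apps.example.mil/payroll/view", rules))  # allowed
print(identity_policy("tom.t", "https://apps.example.mil/payroll/edit", rules))  # denied
```

The first function keys the decision on a value the sender can forge or that NAT has rewritten; the second keys it on an authenticated identity and the full URL of the transaction, which is the shift being described.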

 

So given that, why do you care about the IP address? And the analogy that I use is UPS. Delivering the package is not what's important.

 

What matters is getting that product to a user so they can open the box, take out the device, and use it safely, and it's not broken. That's the important part. And yet we live in a world where we fixate on the packaging.

 

The IP address is the packaging. It's the envelope. I don't care about the source and destination, other than, oh, who sent it to me?

 

And even that can be faked, so it doesn't really matter. So why fixate on an address that has low fidelity when I have this richness of data? I think, again, this is a mind shift, though, because the thing about troubleshooters is that switching from one tool to another is hard. You've invested human capital; you've become very good at your tool.

 

You've machine-learned your brain to work very efficiently in that tool. I can tell you stories about when I went from Network General Sniffer to Ethereal, which later became Wireshark. And I'm laughing because I was a Sniffer bigot.

 

And it was DOS-based. It wasn't even Windows. And I can tell you, F7, two clicks up, three clicks to the right, type in the address, super fast.

 

Until Kristen, who I hired onto the team because she just had this knack for troubleshooting. I saw her do something in Ethereal, and I was like, hey, how'd you do that? Ethereal, up against my Sniffer?

 

This is a Cadillac, baby. And every day I was like, wait, how'd you do that? Wait, how'd you do that?

 

And it took that for me to switch. What I'm saying is, for a troubleshooter to switch, for a network operator to say, I have a better thing, it had better be monumentally better, right? Because switching tools for a troubleshooter is like converting somebody from one religion to another.

 

Probably not going to happen. But now, with Zero Trust and the richness of the data and the linkage of all that data, somebody is connecting it all for you to give you a complete end-to-end picture, and that's a game changer. Everybody goes, oh, that is so much better than my tool that looks at IP addresses and packets.

 

One funny story about packets. I was troubleshooting a high-performance compute system, and I said, give me the time when this happened. And they said, I don't know, between like 12 and 12:30. And I was like, there are like 300,000 packets every 10 seconds. If I had to look through 30 minutes of that, I'd be doing it for a month.
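The arithmetic here, taking the quoted rate of 300,000 packets per 10 seconds at face value, works out as follows:

```python
# Back-of-the-envelope at the quoted capture rate: 300,000 packets per 10 seconds.
rate_pps = 300_000 / 10              # 30,000 packets per second
thirty_minutes = rate_pps * 30 * 60  # the "12 to 12:30" window: 54 million packets
five_seconds = rate_pps * 5          # a 5-second window: 150,000 packets

print(int(rate_pps), int(thirty_minutes), int(five_seconds))
```

Even narrowing to a few seconds still leaves six figures of packets to sift, which is why pinning down the timestamp matters so much in capture analysis.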

 

Can you break it down to within a five-second window? Because even that's 150,000 packets, right? So, am I having a stroke, or did the lights go out?

 

[Tom Tittermary]

It just hit me too, the light behind Tom went out. Tom just had an evil thought, so the light on his face was turned down.

 

[Hansang Bae]

We can get that Batman thing in there. We are definitely going to get that in there.

 

[Tom Gianelos]

Dun-da-da-da.

 

[Hansang Bae]

Yeah. Dun-da-da-da. Hopefully you'll see that, it's an inside joke that we just talked about.

 

But again, the idea is that it's incredibly hard to go to somebody and say, I know you're a specialist in this, go try this other thing. It had better be monumental. For me, the last time that happened was going from Network General Sniffer to Ethereal, and then Wireshark, and now I just love packet analysis.

 

But we have something better. Somebody connected the dots and is handing it to you with a nice little bow, and you just have to open it, look at it, and go, oh, okay, that makes sense. Again, 80 to 90% of it got taken care of, and this is the part I can concentrate on. For a troubleshooter, that's a game changer.

 

[Tom Tittermary]

Yeah. Well, Hansang, thank you very, very much. This is, I feel like this is one of the better ones we've done.

 

This has been great. And I'll open it up to the folks out there, now that I'm looking right at the camera. I'm supposed to talk at the camera when I'm talking to you all.

 

I was very anti-video. This has been less painful than I would assume that it would have been. So maybe expect more video from us in the future.

 

Maybe, maybe.

 

[Tom Gianelos]

This was Hansang's idea.

 

[Tom Tittermary]

Let's not throw him under the bus. I feel like it was a good idea. The feedback will be interesting on that one.

 

[Hansang Bae]

And we did it live.

 

[Tom Tittermary]

Right. Did it live. On the topic of feedback, if there's anybody out there, and you think you have comments about the show, we'd love to hear comments about the show.

 

If you have questions that you think are interesting Zero Trust questions for DOD, and you would like to have them read and discussed on the show, I am willing to bet we will get a solid care package out to individuals who provide solid questions that we address on the show. The email address to send those questions to is zerotrusts, with an S, given: zerotrustsgiven@gmail.com. And as always, thank you very much, Tom.

 

[Hansang Bae]

Good to be here.

 

[Tom Tittermary]

Another amazing episode. Thank you so much, Hansang Bae.

 

[Hansang Bae]

Absolutely, thank you. Thanks, Hansang.

 

[Tom Tittermary]

And thank you guys very much for your time. See you next time.

 

[Hansang Bae]

Thank you, bye.