InTechnology Podcast

Top of Mind for CISOs in 2024 (193)

In this episode of InTechnology, Camille gets into what CISOs, or Chief Information Security Officers, should pay the most attention to in the coming year with Jonathan Nguyen-Duy, Field CISO at Intel. The conversation covers why the cybersecurity industry is still struggling, changes in regulations for reporting breaches, and how CISOs can move from blind to prescriptive security strategies.

To find the transcription of this podcast, scroll to the bottom of the page.

To find more episodes of InTechnology, visit our homepage. To read more about cybersecurity, sustainability, and technology topics, visit our blog.

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Follow our host Camille @morhardt.

Learn more about Intel Cybersecurity and Intel Compute Lifecycle Assurance (CLA).

Why the Cybersecurity Industry Is Still Struggling

Jonathan begins by explaining Verizon’s annual breach report, which he worked on during his 16 years at the company. The report comes from a partnership of over 100 organizations, both private and government, from around the world, and it compiles publicly available information about thousands of data breaches, breaking them down into nine common threat vectors. Based on this data, he shares that the cybersecurity industry is still somewhere between blind and reactive toward cyberattacks, not proactive and predictive as it should be. Jonathan notes three reasons for the widespread failures in cybersecurity despite more money and industry jobs than ever before: a failure of asset management, inadequate security awareness training, and poor configuration management. He also reiterates that cybersecurity and networking follow compute, meaning that CISOs are now seeing how security cannot be tackled in silos in the face of AI and cloud computing. Delivering a positive digital experience will require networking and security to be aligned with the cloud.

New Regulations for Reporting Breaches

Until just a few years ago, there were no legal regulations or compliance requirements for companies to report their data breaches and cyberattacks. Jonathan shares how the GDPR in the European Union, adopted in 2016, was one of the first such regulations put in place. Before then, organizations either reported breaches voluntarily or had them anonymized by the investigators who disclosed them. In the U.S., recent SEC guidance now requires data breaches to be reported within 96 hours of the time a breach was determined to have a material impact on the business. Jonathan continues that because of the SEC guidance, deciding whether an attack must be declared now involves more than the CISO and the security team; the process can pull in other executives like the CFO, CIO, COO, or even the CEO of a company.

How CISOs Can Move from Blind to Prescriptive Security Strategies

After reviewing annual breach reports spanning more than 20 years, Jonathan says he noticed most of the recommended approaches weren’t new. CISOs have continued to focus on simple-to-intermediate controls: a rigorous approach to configuration management, asset management, application vulnerability management, multi-factor authentication, and zero trust practices. He emphasizes that CISOs can demonstrate the effectiveness and maturity of their security programs by their ability to address the known knowns through measures like keeping devices properly patched and updated, knowing what’s in your network, making sure people are properly trained, and implementing those simple controls well. Jonathan also underscores the importance of zero-trust-based approaches such as SASE and zero trust network access with built-in automated posture checks.

Camille and Jonathan also touch on privacy, critical infrastructure, and AI. When it comes to privacy, CISOs now have to consider how to protect not only their own intellectual property but also users’ data and others’ intellectual property. Critical infrastructure is mostly privately owned rather than public, which Jonathan says means CISOs must take a prescriptive approach, building solutions from the descriptive standards they are given. Jonathan then discusses the move to converged platforms for data analytics and the growing role of AI in the DevSecOps space.

Jonathan Nguyen-Duy, Field CISO at Intel


Jonathan Nguyen-Duy has been Field CISO at Intel since January 2024. He brings with him decades of experience in other senior cybersecurity and strategy roles, including Vice President and Global Field CISO at Fortinet as well as CTO of Global Security Services at Verizon. Jonathan is currently an Advisory Board Member for BLAKFX, GroupSense, Nok Nok, TechTalk Summits, and mePrism. He is also a Cyber Committee Member at AFCEA International. His education includes an MBA in Marketing Information Technology & International Business from The George Washington University School of Business and a degree in Applied Information Technology from the U.S. Foreign Service Institute, George P. Shultz National Foreign Affairs Training Center.



Announcer  00:00

You’re listening to InTechnology, your source for trends about security, sustainability, and technology.

Jonathan Nguyen-Duy  00:11

I would tell and advise folks that before you start addressing the shiny object or that threat vector from the SVR the GRU, let’s make sure that things are properly patched, let’s make sure that our folks are properly trained.

Camille Morhardt  00:29

Hi, I’m Camille Morhardt, host of the InTechnology podcast, and today I’m going to talk with Jonathan Nguyen-Duy about what CISOs, or chief information security officers, are thinking about and worrying about in 2024. Jonathan was a CISO of the Management Services Business within Verizon earlier in his career, and he’s now working at Intel. Welcome to the podcast, Jonathan.

Jonathan Nguyen-Duy  00:53

Thank you, it’s really nice to join you.

Camille Morhardt  00:56

So Jonathan, you’ve been a CISO or served in a similar kind of role at a lot of different companies.  And I wanted to start by going back to when you were at Verizon and something you worked on there.  You told me that the team there compiled an annual breach report—which they still do.  But can you tell us more about that breach report?

Jonathan Nguyen-Duy  01:16

It’s a partnership of several hundred organizations: the FBI, the Australian Federal Police, the Dutch National Police, the UK’s intelligence organizations, as well as lots of Fortune 500 companies.

Camille Morhardt  01:30

And what does the report disclose? And who does the report go to? Is it public?

Jonathan Nguyen-Duy  01:35

Yeah, it’s publicly available. It was the single largest piece of content downloaded from Verizon; it’s public information. What it does is distill thousands of data breaches every year into patterns of behavior, threat vectors. So what are we seeing every year? What are the commonalities across nine common vectors? Where’s the single largest source of attacks? Where are we failing? Things of that nature, right: vulnerability analysis, configuration management, security awareness training. So it details recommendations, but it also details the fundamental reason why we fail in cybersecurity: complexity.

So I can give you the three statistics and sources that demonstrate that we are not at a proactive and predictive level; we’re somewhere between blind and reactive.

Camille Morhardt  02:23

Yeah, that’s funny.

Jonathan Nguyen-Duy  02:25

Yeah, most organizations think that because of machine learning and AI, that we’re now moving from being proactive to predictive, not only anticipating the problems, but predicting where the attacks will be. And I will tell you that no, the vast majority of organizations are somewhere between blind and just trying to react.

Camille Morhardt  02:43

And before, you said there were sort of like three things…

Jonathan Nguyen-Duy  02:46

Sure, let’s frame this, okay. In 2023, Gartner says we will spend as an industry somewhere north of $230 billion on cybersecurity. That’s the most we’ve ever spent on cybersecurity. This year, despite the hundreds of thousands of open jobs, we will have more people employed in cybersecurity than ever before. With the advent of things like the California Privacy Rights Act, the European Union’s GDPR, the NIST 2.0 framework, the SEC’s guidance, all types of regulations, right? So now we spend more, we’re more focused than ever, and we have more regulation than ever before in cybersecurity. Pretty much agreed, right?

So if that’s the case, why are we failing? Here’s how we know we’re failing. 99% of all the vulnerabilities that were exploited were known for at least a year. That means we knew about the vulnerabilities, we had the signatures, we had the patches, and we didn’t patch. So that’s a failure of asset management and configuration management. Second one: 83% of all breaches result from the human element, human error; that comes from my old shop at FortiGuard Labs. And so that tells you that despite the focus on security awareness training, we’re kind of failing there, too, right? And then from my old shop at Verizon, the last Data Breach Investigations Report noted that 43% of all data breaches in the cloud were caused by misconfigurations. So now we’re talking about configuration management, lack of proper network segmentation, and flat networks; we are essentially talking about simple to intermediate controls. So what does that tell you? I haven’t talked about AI, haven’t talked about—
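Jonathan’s first statistic, that 99% of exploited vulnerabilities had been known for at least a year, is at bottom an asset-management query. A minimal sketch of that check in Python (the inventory, host names, and dates below are hypothetical, purely for illustration):

```python
from datetime import date, timedelta

# Hypothetical asset inventory: each host maps to the disclosure dates
# of its still-unpatched vulnerabilities. Illustrative data only.
inventory = {
    "web-01": [date(2022, 3, 1), date(2023, 11, 5)],
    "db-01": [date(2023, 12, 20)],
}

def stale_vulns(inventory, today, max_age_days=365):
    """Return hosts carrying unpatched vulnerabilities disclosed more than
    max_age_days ago: the 'known knowns' the breach data points at."""
    cutoff = today - timedelta(days=max_age_days)
    return {
        host: [d for d in disclosed if d < cutoff]
        for host, disclosed in inventory.items()
        if any(d < cutoff for d in disclosed)
    }

# Only web-01 carries a vulnerability more than a year old on this date.
print(stale_vulns(inventory, today=date(2024, 1, 15)))
```

Anything such a query returns is, by Jonathan’s argument, exactly the exposure that most exploited breaches ride on.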

Camille Morhardt  04:29

Right. You can automate, and you can learn. Certainly, if you had a system that could automate and learn from errors, ensure that configurations were accurate, and track assets better, you might not fix the human error, but you could maybe monitor that as well and flag things.

Jonathan Nguyen-Duy  04:44

If you can do that, it all sounds really elegant, right? Just like zero trust: you just have to make sure that only legitimate users can access what they need to do their jobs, applying least-privilege principles. The challenge comes in doing that at speed and at scale: hundreds of thousands, millions of simultaneous sessions, both internal and commercial, across a computing environment that spans your traditional enterprise data center perimeter into multiple hybrid cloud architectures, with computing distributed across endpoints, the enterprise edge, OT devices, IoT devices, smart ecosystems, and then near, mid, and far clouds. And oh, by the way, the object of the exercise in everything we’re doing is to deliver better, more compelling, trusted digital experiences, which the analyst community defines as responsive computing. And we need to do that at speed and at scale so we get better business and mission outcomes. That’s the object of the exercise: more responsive computing for better business and mission outcomes.

So the challenge in doing all that is the speed at which we’ve accelerated: the speed of business is now faster than 5G, so we’re talking sub-five milliseconds. In order to do really great responsive computing, it has to be instantaneous and at hyperscale; that’s the challenge. And the reason we fail on simple to intermediate controls is that our networks are so distributed, they are so complex. You know, the Ponemon Institute suggests that the average enterprise has some 60 to 68 different security products in its ecosystem. All right? And so we’re going through a macro trend of platform and vendor consolidation as well, all in an attempt to get better visibility.

And so I always tell people that cybersecurity follows computing; it always has, and so does networking. In the first DARPA projects, you know, we had users connected to a mainframe on a college campus. When we began connecting these colleges together, that became a wide area network. And as computing shifted from being on-prem and centralized toward the forces of decentralized computing and multiple clouds, you went to a software-defined wide area network. So across a continuum, there are two dynamics: centralized and decentralized computing. And wherever that compute is, it’s designed to give the best, optimized path from information to insights for the users. And so networking and security always follow that compute.

The seminal moment, as we approach the widespread adoption of AI, is that most CISOs and CIOs now really operate with the idea that we cannot approach the challenge in silos, right? The adversaries certainly don’t approach it in silos; they work together in formal and informal teaming arrangements. The problems we encounter don’t come to us only in networking, or only on the security side, or only in compute and cloud. And so if the object of the exercise is to deliver a really good digital experience, then we have to ensure that the networking component and the security component are aligned with the cloud. It doesn’t really matter about the elasticity or scalability of your cloud solution if the users can’t securely and safely access it, right? So that’s why you begin seeing things like SASE, Secure Access Service Edge, which combines secure SD-WAN with a security stack delivered from the cloud to the enterprise edge, all of it connecting any user on any device from any location to any application or workload hosted practically anywhere. And so that’s the plate that we’re trying to eat from, and it’s a bit overwhelming for a lot of folks.

Camille Morhardt  08:38

Of all of the breaches that actually happen, what percent do you think end up reported in this breach report? Do you think you’ve got 80%? 99%?

Jonathan Nguyen-Duy  08:49

I would say it’s all the ones that are publicly available, that people have declared. The nature of breach declarations is that not all companies will declare a breach. And for most of those 16 years, there weren’t breach declaration laws and compliance requirements like we have today. Beginning with GDPR, and beginning with the SEC guidance, now we have to declare a breach within 96 hours–roughly four days–from the time we determined that the breach had a material impact on the business, right?

So in the past, if you got a ransomware demand and you paid it, you didn’t have to tell anybody; if you got a DDoS extortion demand, you paid it and you didn’t tell anybody. And so a lot of times the information that we would get at Verizon came from folks like Fortinet, from CrowdStrike, from FireEye, the folks who were investigating the data breaches, right? And sometimes it’s anonymized, so you don’t know the name of the company, but you know the industry, you know the size of the company. And the data that comes in from folks like the Secret Service and the FBI tends to be anonymized in terms of the actual name of the enterprise. But you begin to get a very good idea of what the adversary is doing, what they have been doing, and what they most likely will be doing. And it begins to allow us as practitioners to say, “All right, let’s say I’ve got $1 to spend. Where am I going to spend my dollar? Am I going to focus to the left of detonation, on prevention and identification? Or am I going to focus to the right of detonation, on incident response, recovery, and restoration? And how do I balance that, right?”
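The declaration window Jonathan describes reduces to simple date arithmetic: the clock starts at the materiality determination, not at detection. A sketch, with the caveat that the conversation uses the 96-hour figure while the SEC rule itself is framed in business days, so treat this as an illustration rather than compliance logic:

```python
from datetime import datetime, timedelta

def disclosure_deadline(materiality_determined: datetime,
                        window_hours: int = 96) -> datetime:
    """The clock starts when the breach is determined to have a
    material impact, not when the breach is first detected."""
    return materiality_determined + timedelta(hours=window_hours)

# Hypothetical determination time, purely for illustration.
determined = datetime(2024, 1, 8, 9, 0)
print(disclosure_deadline(determined))  # 2024-01-12 09:00:00
```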

Camille Morhardt  10:25

Okay, so I have many questions off of all that. But for the last time you did that breach report, tell me what year it was. And then how much was reported as having been paid out in ransomware or similar kinds of extortion?

Jonathan Nguyen-Duy  10:40

You know, the last time I did it was 2016, and ransomware really wasn’t that big of an issue in 2016. But I think the studies in the marketplace now suggest that, on the low end, about 40 to 43% pay the ransom; on the high end, 75% pay the ransom. And the reason there’s such a disparity between the two is that up to this point, there hasn’t been a requirement to declare that you’ve paid the ransom. In many cases it’s a double-headed spear: when you pay the ransom, what you get back may not be a copy of the data with full confidentiality, integrity, or availability; it may have been altered. In addition, you may have been paying someone like the Iranian Revolutionary Guard Corps, which is designated as a terrorist organization by the Office of Foreign Assets Control. And then you might get a visit from the government that says you’ve actually conducted commerce with a designated terrorist organization, so you may be investigated for that. That happens, right? So it’s a very tricky world out there.

Take the case of MGM, which did not pay the ransom, and Caesars Palace, which did pay; someone reported the ransom was about $15-20 million, I think. And then there were the hundreds of millions of dollars of losses at MGM. There are situations at Uber, situations at SolarWinds, any number of companies right now where the CISOs are struggling with when to make the declaration. In fact, I would argue that because of the SEC guidance, and we can talk about that, it takes a larger organization than the CISO or the security team to actually decide: is this something we declare? It involves the CFO because of the materiality; it involves internal counsel and external counsel; it involves the CIO, the COO, even the CEO. So yeah, there are some big changes happening on the threat landscape side, the compliance side, and the business requirements side. And we’re about to hit a seminal moment in our industry with the widespread adoption of AI.

Camille Morhardt  12:48

Yeah, how do we go from blind to prescriptive? It’s like that Gary Larson cartoon where the guy’s doing all the math, and then at step seven “a miracle occurs.” We can’t just put AI at step seven and say it solves everything. So break it down.

Jonathan Nguyen-Duy  13:07

Yeah. So part of working on that Verizon Data Breach Investigations Report across my 16 years at Verizon, I remember looking back when I left and asking, “What’s really new this year?” And I realized a lot of it wasn’t new, so I began looking at the executive summaries of every threat report and breach report from across the industry over the last 20 years. If you look at the executive summaries, you’re going to find something pretty interesting. What do they all focus on? Simple to intermediate controls: a rigorous approach around configuration management, asset management, application vulnerability management, a rigorous approach around multi-factor authentication and zero trust. I would tell you that one of the demonstrations of the effectiveness and maturity of a security program is your ability to address the known knowns, let alone the known unknowns. Well, there will always be known unknowns, because on average there’s a bug for every 25 lines of code. So one of the ways you demonstrate the efficacy of your program is: can you at least address the known knowns? Right?

When you look at making sure you have properly patched, updated devices, and you know what’s in your network, you know, those are the things we need to focus on. I would tell and advise folks: before you start addressing the shiny object or that threat vector from the SVR, or the GRU, or the IRGC, or whomever, the Lazarus folks from North Korea, let’s make sure that things are properly patched, let’s make sure that our folks are properly trained, let’s make sure we’ve got the simple to intermediate controls in place and we’re doing them right. And that’s why, when you look at the data suggesting that 83% of all breaches involve the human element, I’m a big fan of compensating controls; I’m a big fan of things that are always on and running, to make sure that even though I’ve got primary controls in place, there may be a failure at some point.

And so I would really look at zero-trust-based approaches around things like SASE and zero trust network access, because those types of solutions have these automated posture checks built in. So for every request that’s made to access an application or any network resource on any network, that request is going to be validated. That’s really Jonathan; that’s really Jonathan’s assigned device; that device is actually properly configured and patched, it’s got encryption running, and it sits behind a firewall. Then the role-based access controls are validated: Jonathan has legitimate access to that email server, okay. And then every session is logged and monitored, and that applies every time. Those types of things go a long way. Once we do that, we begin making sure that we’re balancing risk management, we’re balancing compliance, and we’re enabling the business at the same time. That’s how we take the empirical data about what’s happening in the environment to prioritize our investment. And from there we move on to more advanced, more challenging issues.
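The per-request posture checks Jonathan walks through (is this really the user’s assigned device, is it patched and encrypted and behind a firewall, does role-based access permit the resource, is every session logged) can be sketched as a toy authorization function. Every name and field here is an illustrative assumption, not a real ZTNA product API:

```python
from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    patched: bool
    encrypted: bool
    behind_firewall: bool

@dataclass
class Request:
    user: str
    device: Device
    resource: str

# Least-privilege grants: users see only what they need for their jobs.
ROLE_ACCESS = {"jonathan": {"email-server"}}
AUDIT_LOG = []  # every session is logged and monitored

def authorize(req: Request) -> bool:
    checks = [
        req.device.owner == req.user,   # really this user's assigned device
        req.device.patched,             # properly configured and patched
        req.device.encrypted,           # encryption running
        req.device.behind_firewall,     # sits behind a firewall
        req.resource in ROLE_ACCESS.get(req.user, set()),  # RBAC validated
    ]
    allowed = all(checks)
    AUDIT_LOG.append((req.user, req.resource, allowed))
    return allowed

dev = Device(owner="jonathan", patched=True, encrypted=True, behind_firewall=True)
print(authorize(Request(user="jonathan", device=dev, resource="email-server")))  # True
```

The compensating-control point is carried by the audit log and the default-deny `ROLE_ACCESS.get(..., set())`: even if a primary control fails, an unlisted resource is refused and the attempt is still recorded.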

Camille Morhardt  16:10

I wanna go back to a topic you mentioned earlier.  You said the average enterprise company uses up to 70 different security products.  Which is crazy, honestly.  But these solutions, they don’t all collect data and they don’t all talk to each other.  So, you were part of a group that tried to address this.  Can you talk about that?

Jonathan Nguyen-Duy  16:33

Somewhere around mid-2015, Splunk (and I was on their advisory board) came out with this idea called “the data fabric,” because Splunk’s tagline was “listen to your data.” The idea was to make automated, data-driven decisions, right? So that’s what we were trying to do in security. And I said, “Well, how do you make an automated, accurate, data-driven decision if you’re not really listening to all the data? If your security products are not really integrated to collect and share information, we’ll never get there.” And I went to Fortinet because they were pioneering something called the Fortinet Security Fabric, which really aligned with that idea that our security technologies should be integrated to collect and share information, and not just from one vendor, but able to integrate with the leading best-in-class vendors. Because as operators, I think we should focus on operating the technology versus trying to integrate it. That was always my challenge to the product folks: “Hey, make it easier for the operators to use not only your product, but others as well.” Because no one’s going to be one vendor, one product, right?

But we began seeing that approach around platform consolidation. And I think the larger macro trend across this arc is the idea of big data, because we are continually trying to harness vast amounts of data and distill it into very small insights around which we can build more compelling outcomes and experiences. Whether it’s hyperscale financial operations like arbitrage, whether it’s at an enterprise level about the optimal utilization of fuel, or whether it’s on a smart factory floor that says, “Hey, if we run this conveyor belt at three RPMs less, we’ll save $5,000 a minute in fuel costs.” Or it’s on a smart farm with operational technologies, or it’s around things like electric vehicles and Level 5 fully autonomous vehicles traveling down a connected smart highway ecosystem, where you have proactive maintenance: “Oh, that’s Jonathan’s vehicle coming down the road. It’s got 50,000 miles on it. It needs to be patched; it needs to have all types of things done to it. Let’s begin to send Jonathan alerts; let’s begin to tell Jonathan, ‘Hey, you need to be aware of these things.’” And it works the same way with your refrigerator, my refrigerator; it knows that I’m running low on any number of things. It says, “Hey, Jonathan, Sam Adams Boston Lager is coming on sale,” and there’s an opportunity to implement things like precision marketing, right?

The CIO of one Fortune 500 company told me, and this was one of those eureka moments in my career, “Look, the reason this traditional brick-and-mortar retail business grew during the pandemic, during a shutdown, was that we were able to create new and better ways for our customers, partners, and employees to interact with our brand.” And that was the eureka moment: look, it’s great that I manage risk; it’s great that I manage compliance. But the fact that I can unlock value by creating new ways of interacting (what marketing people call adjacencies), and the ability to monetize what finance people call goodwill and what marketing people call brand equity, is how you open new markets and create new value. And so he said to me, “Look, I came into this job with a remit of, say, $100 million. But because of the digital transformation initiatives we created out of existing assets, we unlocked $20 million in new revenue.”

Camille Morhardt  20:08

You’re dancing on that boundary, though, of privacy, I think. Talk a little bit about privacy and how that factors in.

Jonathan Nguyen-Duy  20:17

So privacy is looming as the second-biggest issue of concern. The first is cyberthreats, right? The second, according to most of the surveys, is going to be things around AI and AI ethics: not only the ability to implement AI, but how you demonstrate that the way you collect, curate, store, sell, and ultimately destroy data, and apply AI, is done in an ethical and compliant fashion. So one of the other macro trends is that we need to understand now that data is currency; data has value, because we use data to execute things like precision marketing.

By the way, there is a balance, as you said, because the consumer, each of us, wants a more personalized and customized experience. When I go to city hall, I want the same level of wow that I get when I go to Amazon; I want to interact with every facet of government at a time of my choosing, by a means of my choosing, and in the currency of my choosing. I want to be able to notify city hall that there’s a pothole on my street.

Camille Morhardt  21:29

You’ll have me when you tell me I don’t have to go to the DMV anymore.

Jonathan Nguyen-Duy  21:33

So that’s one of the things I was working on: how do we reimagine not only the retail experience but the government experience, right? We were working on things like, “I would like to pay my taxes a year early, proactively; what kind of discount would I get for that? What is the net present value of my taxes?” We were going to say, “Hey, you should allow me to pay my taxes or my fees and licenses in any fiat currency that I want, or any cryptocurrency, and then I’ll pay a convenience charge.”


So what you’re beginning to see is that we are more interconnected than ever before, operating at faster speeds, with a huge demand for greater personalized and customized capabilities. In the midst of all this, you had the earliest privacy regulations coming forth from places like California and the European Union. And now, by the way, it’s not only California; it’s Nevada and at least seven other states in the U.S. that will ask you as an enterprise to demonstrate, upon demand: How do you collect someone’s information? How do you store it? How do you curate it? How do you analyze it? How do you trade it? And how do you demonstrate, upon demand, that you’re going to get rid of it? This is a monumental shift in thinking for operators and CISOs and Chief Privacy Officers, because up to this point, we were concerned about protecting our own intellectual property: our IP, our software. But now, under these privacy regulations, we are custodians of someone else’s intellectual property: their PII, their purchasing history, their browsing history, because that can all be monetized, and people are brokering it.

Speaking of the larger financial services firms, I was talking to one that was looking at the purchasing drivers and behaviors of people in northern New Jersey who were buying paint, going back to 1910, so they could model campaigns around how to sell paint and household goods. That data has huge value, and it’s being brokered every day. And so you have data brokers. That’s why, when you go to these websites, there’s a pop-up that says, “Hey, here’s our privacy policy. Do you accept the cookies? Do you mind being tracked?” Right? One of the things we joked about earlier is that we all live a life in the open, because we’re all watched. So we’re getting into new areas. And I think on the consumer side, people are also beginning to realize that data has value and it has provenance. So if you’re going to export data from the European Union, in some cases you may have to pay for that, because that data is certified to be EU data.

Camille Morhardt  24:09

Well, who’s getting paid in that case? That would be the government?

Jonathan Nguyen-Duy  24:12

Not necessarily. It could be the data brokers as well; it could be the marketing companies. One of the major institutions, I think it’s a North American marketing institute, says that on average it costs $30 to $32 to profile you or me: not only our PII, but our socioeconomic background and our purchasing proclivities. In fact, the AI platforms are so good right now that if I know the zip code in which you live, I could generate a profile of you, at what a statistician would call a 99.97% confidence level, that says, “Yeah, you do all the following things.” So that data has value. And now individual consumers are realizing that they’re part of an ecosystem that conducts commerce in data. If you use a browser called Brave, you can opt to have your activity anonymized, or you can opt to have it recorded and then get paid in attention tokens.

Camille Morhardt  25:11

Are we gonna have backlash? I mean, here’s the thing. I feel like people who were born and experienced a decent portion of their lives pre-internet retain a bit of, like, paranoia: “No, I don’t want any of that stuff. And by the way, I don’t consent to my face being recognized.” And then, of course it’s the 80/20 rule, you have people who grew up immersed in the internet, grew up even within filter bubbles, and they already know the advertising is targeted, they already know everything’s being tracked, and they don’t care; it doesn’t even bother them. So I don’t know, are we just headed completely in that new direction? Or do you think there might be a swing back with–

Jonathan Nguyen-Duy  25:50

Yeah, there might be, but that’s the basis of why the CISO and the privacy officers and their teams exist: because they have to have the ability, if someone opts out, to say, “Here’s how we demonstrate that we’re no longer collecting your information. Here’s how we demonstrate that we’re no longer brokering it and nothing is stored.”

Camille Morhardt  26:08

So what about looking beyond the individual person, or even the company level? With everything being so distributed, so interconnected, and also consolidating to a degree, as you pointed out, where does that put critical infrastructure in the attack space? And how do industry and government look at that?

Jonathan Nguyen-Duy  26:33

So critical national infrastructure reflects another trend, which is the convergence of cyber and physical, IT and OT. Our systems are more interconnected than ever before. What we thought were air-gapped environments, in most cases, aren’t really air-gapped. You have to operate with the principle of being breached, right? That’s another paradox in our industry: we started off with the idea that we could not only detect but prevent data breaches, and now we start off with–

Camille Morhardt  27:00

You’ve already been breached. Now what?

Jonathan Nguyen-Duy  27:03

If you assume breach, you also have to assume compromise. And within critical national infrastructure, most of CNI is privately owned, not public at all. So I think the US government and the global community have done a great job in terms of promoting best practices. And you hit the nail on the head earlier: you said most of these standards tend to be descriptive and not prescriptive. Prescriptive is what CISOs do, and what chief privacy officers and compliance officers do. They go about prescribing the solutions to meet the descriptions that have been outlined. But I think this is why zero trust is so foundational to any IT strategy: this idea that you really can ensure that only legitimate access is being granted at all times, across the OT/IT space, and across those critical national infrastructure companies, right.
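The zero trust idea described here, that every access request must be verified on its own merits and that network location alone never grants access, can be sketched as a minimal policy check. This is an illustrative sketch with hypothetical field names, not any specific product's implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g., MFA passed (hypothetical field)
    device_compliant: bool     # device posture check passed
    network_trusted: bool      # "on the corporate network" flag

def grant_access(req: AccessRequest) -> bool:
    """Grant access only if identity and device posture both verify.

    network_trusted is deliberately never consulted: under zero trust,
    being on a trusted network segment grants nothing by itself.
    """
    return req.user_authenticated and req.device_compliant
```

With this policy, a fully verified request from an untrusted network is allowed, while a non-compliant device sitting on the corporate LAN is refused.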

And the only way you get to that point is with what we're all searching for, which is a high-fidelity, single source of truth. You know, that single pane of glass where you listen to your data and collect all the data. This is why I think we're at a seminal moment in our industry, where we begin to realize that the level of complexity and the sheer volume of data will never be addressed by non-integrated platforms or by people alone, and that we need AI so that we can get to a point where we can be proactive and predictive.

Oh, by the way, the stats I cited earlier from both AT&T and Verizon suggest that somewhere between one half and 70% of all data breaches are discovered by third parties: by law enforcement, by the FBI, by researchers, by victims notifying their credit card companies. What does that really tell you? It tells you that in the vast majority of cases, the internal security team, which is spending more than ever before, has more people than ever before, and has more focus than ever before, is still largely somewhere between blind and reactive.

Camille Morhardt  29:00

Well, you also said we’re all searching for a single source of truth. Is there even one anymore? I mean, is that sort of barking up the wrong tree?

Jonathan Nguyen-Duy  29:08

No, I think that's what we're trying to do right now, as we converge platforms. We're moving away from owning and operating our own networks and our own infrastructure, let alone writing our own software; most enterprises today will have a hybrid of that. And so what we're getting to now is converging vendors to platforms, and then building the automation around those platforms, so that you can feed the data into one analytics platform. You want to see network data and the performance of the network, so you can not only identify but proactively notice logjams or mechanical issues or routing issues that you can fix; you want the same thing for security, and you want the same thing for your cloud. And the fourth element you're going to manage is your digital experience. You want to be able to see how network performance, security performance, and compute performance affect customer or user experiences, and how that translates into things like conversions and revenue for your B2B and B2C transactions. Right?

So yeah, we're all trying to get to that, and Splunk sort of began that trend around data lakes. You'll see that happen as we converge these platforms.

Camille Morhardt  30:24

So given the mess of applications out there right now, and the mess of platforms and SaaS offerings, is AI, and particularly generative AI, going to help us get to a point where we actually just abstract away the apps and the platforms, and just kind of express what we're interested in finding or having done, send it through that magic AI box, and out the other end come the reports? Instead of integrating all of these different applications and plugging in APIs here and there?

Jonathan Nguyen-Duy  30:56

I think we'll still do that. I think where AI is going to be applied is in the DevSecOps stage. When I think of security, I think security is a continuum that begins with software creation and development, through implementation, through its lifecycle, and on to withdrawal. I don't think of security as being upstream or downstream; I don't think of security as being to the left or to the right of detonation. And so AI has a role to play there. That's why you're seeing so much emphasis on things like cloud security protection platforms, or posture management platforms. You're seeing AI in DevSecOps, where it is scouring the code and software in flight as it's being developed. And that's where AI comes in really handy: the ability to use those billions of nodes to ask, "how is this being written? How is this being developed?" and to suggest that hundreds of lines of code later there's going to be a bug. You begin to streamline and automate those things. But likewise, be mindful that the adversary is using the same thing. There's a rule of thumb that says anything that's commercially available has been purchased, acquired, and deconstructed by the adversary, right. So you're gonna see AI used in the same way. But the seminal point moving forward is that we're gonna see a much better QA process around software development, and software development is going to be more distributed than ever before. So you're gonna see that in many, many places inside of organizations.
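The "scouring code in flight" idea, checking software for problems as it is being written rather than after release, can be sketched as a toy pre-merge scan that flags a few well-known risky patterns. All patterns and names here are illustrative assumptions; a real DevSecOps pipeline would use a dedicated SAST tool or an AI-assisted service, not a handful of regexes:

```python
import re

# Hypothetical, deliberately simplified ruleset. A real scanner
# understands syntax and data flow; this only pattern-matches lines.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"\bpickle\.loads\(": "unsafe deserialization",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan(source: str) -> list[str]:
    """Return one finding per risky pattern per line of source code."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {description}")
    return findings
```

Wired into a CI step, a non-empty result would block the merge, which is the "shift left" behavior being described: the bug is caught during development, not after detonation.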

Camille Morhardt  32:22

Very interesting. Jonathan Nguyen-Duy, thank you so much for joining. You have a really interesting background, with former CISO roles at both Verizon and Fortinet, and you are of course now in a similar kind of Field CISO role at Intel. So thanks for your time.

Jonathan Nguyen-Duy  32:39

Thank you.

