Greenville Business Magazine

A Deep Dive Into Cybersecurity

Jul 02, 2024 09:35AM ● By Donna Isbell Walker

(From left, Brian Daughhetee, Jamie Barbery, Melissa Davis, Thomas Scott. Photo by Amy Randall Photography)

Online security has become an increasing concern in recent years, as hackers have grown more sophisticated and artificial intelligence continues to evolve.

Integrated Media Publishing hosted a roundtable discussion with four leaders in the world of IT and online security on May 20, 2024.

Here are excerpts from that conversation, edited for brevity and clarity.

The panelists were:

Thomas Scott, executive director of CyberSC

Melissa Davis, instructor at USC Upstate

Jamie Barbery, CISO at United Community

Brian Daughhetee, president and CEO of ANC Group

Integrated Media Publishing Editor David Dykes moderated the discussion.

Question: Let's start off this morning talking about what was recently in the Wall Street Journal: that cybersecurity threats have increased for businesses over the last year, according to a survey of compliance professionals. The newspaper said almost all mid-size companies, those with between $50 million and $1 billion in revenue, said they felt cyber threats had increased. Nearly half of the survey respondents said they had only a basic or novice level of expertise in overseeing cybersecurity-related compliance, and only 8 percent consider themselves experts. And the South Carolina Department of Commerce, citing IBM data, has said that only 40 percent of companies falling victim to a major data breach would survive more than six months. My question to each of you is, what are the most pressing cybersecurity concerns facing businesses and other organizations? Brian, we'll start with you.

Brian Daughhetee: We obviously see a range of things, and for a lot of people, I think the go-to instinct is to throw more technology at it. But the common weak link over and over again is the people and the culture. We can put things in place when we find a breach, and fix it. Usually, we can analyze the breach, determine what happened, and close whatever hole was there. The thing that keeps popping up consistently, though, is training staff, getting them in a security-minded culture and security mindset. That's probably the biggest lift because it's ongoing. You're constantly getting new staff, new employees.

Q. Melissa, you were nodding.

Melissa Davis: I agree. In study after study, and in my courses, I consistently talk about how the biggest risk to any organization is human risk: the errors that come from not understanding the security risks associated with the job, how to use the technology, what is expected of them, and then the accountability part of it. But taking it to the organizational level, I think we have spent so much time being reactive instead of proactive. And that's understanding how our data needs to be used, how it needs to be protected, and what risks are associated with that, because we can't close everything down.

We have to allow access or there's no point in having the data. But I think you have to step back and take a holistic look at what needs to be protected and how we can protect it, and at what risks are still out there that we need to be mindful of. Technology can possibly help with that, but training employees, and making that a continual process rather than something done only at orientation when they come in, I think is critical.

Q. Jamie, you deal with money, and money is usually involved in these cybersecurity threats. What are your thoughts?

Jamie Barbery: I think smaller or mid-sized companies … are mostly operationally focused and so not security-focused. I think what the others have alluded to is that a culture of security has to be built within every employee. Have a good cybersecurity training program, and if you don't have that in-house specialty, like you said, only 8 percent consider themselves experts, get that third-party expertise as a partner, then start working at the highest risk and move your way back to the internal piece of the network.

Have a good partner if you're not able to get that in-house expertise, have a security-focused culture, and then let that partner help you through the process. Also, cybersecurity insurance goes a long way in keeping you above water after a breach. Those are all things to consider to change that culture mindset of security.

Thomas Scott: I think among the biggest pressing concerns, David, is how we protect our data, whether that's intellectual property or client data. Protecting that information from cybercriminals and nation-states is, I think, the No. 1 concern. It's all part of, as we've heard from the other speakers, risk management.

Many organizations look at financial management, managing the risk of finances, and reputational risk, but they don't really look at or understand cyber risk. And the last thing, which everyone's already alluded to, is the employees. We give everyone a username and a password to our accounts. Those are like the keys. And then we trust that they're going to do the right things. If we don't continually train them, we'll find out that they don't have that knowledge. The way I've said it is, my job is to keep the bad guys out and to keep the good guys from letting the bad guys in.

Q. Each of you has talked about preparation, culture, employee training. That's anticipating a hack or a cybersecurity event. How should you respond to one when it happens?

Daughhetee: Certainly, I think every organization needs some level of an incident response plan. The last thing you want to be doing is trying to figure out what you're going to do once the building is on fire. We had fire drills as children in school. You want to have a similar program in your own business where you've got a cyber readiness drill, and it needs to be ongoing and looked at every year and updated. I know for us … we're a technology company, so we're really good at the tech.

We actually brought a third party in to assess our readiness for a cyber incident, and we scored 100 on the tech. But where we fell short was the policies and some of the human soft skills, because as humans, we want to help people. You look at some of the breaches that have happened, like the casino breach last fall. That was a help-desk person trying, they thought, to help somebody do their job. So, you've got to rein in your helpful instincts sometimes as a tech, and that goes against human nature. So, over and over, (you should prioritize) training and readiness so that you say, OK, let me just stop for a second. How do I verify that this is really one of our employees asking for a password change?

Q. That was an open-ended question. Anybody else like to jump in?

Davis: As an instructor, I teach my students that it's no longer about if the attack is going to happen; it's when. So, what is the response at that point? Has the data been backed up? How have you prepared? Have you thought about all of the what-ifs? What if this happens? What if they get to this data? What if they get to this part of the network? You take that, you go back, and you bring that third party in to help you analyze what risks you still have to be aware of and leave open, and where you can do better.

Was it the people? Was it someone calling in, trying to pretend to be a help desk or needing their username and password? Or was it a legitimate program or application that had to have that back door through the firewall to come in that you now need to look at totally revamping how that application is handled or a new application brought in?

Barbery: I'd say it's 90 percent preparation ahead of time. Like the others have mentioned, you need a solid response plan; the first time you see these scenarios shouldn't be when you have a breach. So, make sure you get ahead of that. And like I said previously, with cybersecurity insurance, they will give you a cyber coach, and you're part of a forensic team. Have things in place, including the counsel you would need, to understand what to communicate if it's a public firm versus a private firm.

How do we keep privilege in line from a lawyer perspective? How do we protect the company as this goes out? Especially at a public company, you really need to be prepared for those types of situations and for what's going to cause you to send a disclosure to the SEC, other types of things. So be very prepared ahead of time to combat these things, because it is an inevitability. It is more when than if. So be prepared on the back end.

Scott: I'll chime in that, as we've talked about, having a good plan in place means practicing and exercising that plan, so it's not the first time you've seen this scenario. We want to act with alacrity. We want to act quickly, because we know that the quicker we respond, the better the chance that we reduce the impact of that cyberattack and that cyber disaster. The other thing I'll throw out is that any organization should take a look at its capabilities to respond to that fire or that incident. And if they don't have the internal capabilities, they should look to some level of a forensics partner or incident response partner that can come in and has the technical chops to help them understand what truly happened.

Q. You've mentioned insurance. Is there a feeling that ransom should or should not be paid, based on what you've studied and the experiences you've had indirectly?

Scott: Yes and no. The FBI and law enforcement will tell you, don't pay the ransom. My CEO is going to say, what's it going to take to get back up and operational? And if we're making $3 million a day and they want a $5 million ransom, we pay that $5 million ransom, and we're back up and operational in two days. The problem is that you're dealing with criminals. Are they going to be truthful and honest in your exchange?

Equally, if whoever is attacking you or holding you for ransom is a known terrorist or on the government's terrorist list, you can't legally pay the ransom, because now you become a criminal for funding terrorists. It's a double-edged sword in many ways.

Daughhetee: It's very sticky. It really is. And there's a whole other side, too: what do you do when you walk in and you see 15, 50, 100 employees, and if you don't pay the ransom, you could be out of business? These could be people and families that aren't going to have a paycheck coming in that week. There's a very human element to this, and the attackers know that. We've seen a decrease in a lot of the attacks on the really big businesses, the Targets and the Home Depots, and they're going after the smaller businesses, because they can leverage that emotional factor as well.

Barbery: I don't think it's an easy yes-or-no question, should you pay the ransom or not. I think it depends on lots of factors, like you mentioned. It's really a financial decision based upon how long it takes you to get back to normal operations, whether your backups were accessed, and a lot of different scenarios. Could you get back on your feet in a short period of time versus paying the ransom? And most of the time, from consultants I've spoken with, although they are criminals, most of them have to propagate their business to continue operating. If you pay the ransom and they don't come through, then no one's going to keep paying the ransom.

So, for the most part, they come through with what they say they will do. In the best-case scenario, you wouldn't pay the ransom, and that would disincentivize these people. But unfortunately, the fiscal responsibility of the company to stay afloat and make money is going to require that payment to be made. I keep harping on cybersecurity insurance, but a lot of times there are cyber coaches who negotiate specifically with these threat actors. They have relationships with them. The first offer is not the last offer.

So, a lot of things work behind the scenes to reduce those costs. Unfortunately, that is a specialty and a line of business that exists today, but that's par for the course. That's why it's important, I think, to partner with the right people if you do have an incident who are wise in the ways of how these things work in the banking industry.

Q. Melissa, is there a textbook answer?

Davis: There's not a textbook answer, but I don't agree that there's no yes-or-no answer. I lean more toward the side of, if you can get away with not paying the ransom, you don't, because from studies that I've read, paying opens you up to further hacks and further breaches. Now you're an easy target. They know you have that emotional connection to your people and that you're willing to pay that money. And we're also dealing with criminals. So, they can give you that key to decrypt your data. But are you getting everything back? Are you actually able to resume business operations?

Daughhetee: And is the integrity there? Has the data been modified? That's the thing, because given the size of the data that's often compromised, they could change files, and it could take you months or years to find out. That's a huge problem.

Barbery: In addition to just encrypting, did they exfiltrate any of your data? Because that's the threat: extortion. One piece is getting your operations back up. The next piece is extortion and reputational risk. Those are the two main pieces. You would want them to prove that they do have your data, that they did exfiltrate it, so you understand what data was taken. And a lot of times, they'll send you a copy of the exfiltrated data so they can hold it over your head. So, what factors are there other than just operational? There's a lot of reputational risk and financial risk, and for a publicly traded company, stock risk, things like that.

Q. Brian, you mentioned that larger companies are not being as affected maybe as much as in the past. But the Wall Street Journal, again, says that because large companies are better prepared to repel cyberattacks, hackers have shifted their focus to vendors. So now much smaller companies find themselves in a vulnerable position with limited cyber defense resources and expertise. That could threaten thousands of organizations. How much farther down the food chain is this going to go?

Daughhetee: I think a big driver will be, like you said, that you've got financial assets people could go after. Ultimately, the hackers want to get paid, and the levers they can push are, well, we've got your data. The reputation lever is huge. A lot of times we'll deal with clients who go, I'm a small law firm. We just have three partners. We've maybe got 100 clients or whatever, so why would a hacker care about what we do? And it's like, well, you've got 100 clients who may not want their legal case or legal information out there in the world. And so with all of our customers, we come back to preparedness, because as you mentioned earlier, if you've got good backups and the backups are solid, you don't need to pay the ransom. That's one piece of it. But if they exfiltrated data, they still have that reputational lever to push, even if they haven't brought your servers down.

So, we look at 24/7 monitoring of the network to look for behavior that's just out of the ordinary. All of a sudden, the custodian's PC fires up and starts sending large amounts of data up to the internet. You don't want to come in on Monday and find out that that's happened. You want to see it when it begins, and you want to shut it down immediately. … As you said, the larger companies, they can afford all those tools, and the staff of people it takes to run them. Most small to midsize companies can't afford that. And that's where we come in. We do that for them. We provide that 24/7 watchdog on the things that are going on in their business so that they can go home and sleep at night and have a good weekend with the kids and have a normal life.

Scott: But to your point and to Brian's point, he now becomes a target. Because he is now holding the keys to all of his clients and customers. … In all reality, it doesn't make a difference how good your defenses are. If somebody wants to get past them, they're going to find a way.

Daughhetee: And I think the key thing there is vigilance, and also compliance. In our industry, you'll never stop everything. But if you can say, look, I am doing what the industry says is best practice today, then at least you can say, I did everything. If someone wants to get into Fort Knox badly enough, they can do it. And unfortunately, in our world, it's way easier to make off with terabytes of data than with millions of pounds of gold bars.

Barbery: I think third-party risk is the highest it's ever been in my career. As our data moves to the cloud and out of on-premises data centers, that type of thing, the risk just increases significantly. So, look at all your third-party vendors and fourth-party vendors; usually those vendors get smaller as you go down the line. (You should) have a good vendor management program and understand the risk … (and) understand where your sensitive data lies with all these third parties and which other third parties they share it with.

And so that's always a very large concern. And you mentioned the low-hanging fruit: a company such as yours that holds all these other companies' data. Very high risk there. Same thing with Microsoft. Everybody has 365, and so it is worth the effort for people to try to break into 365, because there's so much information there. And recently, 365 was hacked by nation-state actors. It was a really complex operation, but they were able to get State Department information, lots of different information, from 365.

And so, the more customer information you have with one vendor, the more risk there is around that vendor, and therefore the due diligence around third-party risk with those vendors should be very heightened. The data is leaving internal servers and internal data centers and going out to cloud services and third-party providers more every day, and that just heightens the third-party risk. … What are those companies and their affiliates doing with your customer data, and how do they keep it safe?

Davis: And that comes down to service-level agreements, the SLAs between the organization and the cloud service provider you're dealing with, especially for smaller companies. I think there is a lack of understanding of how Azure and multi-tenant servers work; it's not just you in the cloud with all of your data secure. You may have shared resources with other companies on those Azure servers or in that AWS environment, and you need to understand who has access to that data. That all has to be considered when you're doing these agreements with any vendor, along with non-disclosure agreements and who actually is accountable when there is a breach. So, if you're using Microsoft Office 365 as a service for your organization, and they are breached and data is stolen from your company, who is liable at that point? Is it Microsoft, or is it you as the small company using their service?

Daughhetee: We view a large part of our job with our clients as being an educator. Sometimes the education is, we don't do this for you currently. It's like, you're our IT company; you do all of it. Well, we're not your compliance officer. We can be; it is a service that we offer. But we're not your chief security officer. We can be. And so, we've had to get much more granular in the last 25 years, where before I would say, I'll take care of that for you. And now, like that risk you were talking about, if we tell somebody what to do, that you have to install this system and you'll be safe, we now own their risk. And that's just a bad position, because then they have no skin in the game to try to fix the risk themselves, and we're liable.

Q. Let me throw this question out in general: the impact of work-from-anywhere on the security of employee devices connecting to a corporate network. You might remember years ago, when Sony got hacked, Sony said, we're going back to written correspondence internally. We're going to keep more written paper files. Are we at that point now, where work-from-home opens up a whole new playing field for the hackers? And do we need to go back?

Daughhetee: We do not need to go back. No, it'll never happen, short of the technology just collapsing for other reasons. … If Covid had happened five years earlier, I don't think we as a country would have weathered it nearly as well. The broadband infrastructure wasn't in place. … We do a lot with education, and kids had to do distance learning. The mechanisms just weren't there. Whereas when it happened, there were already districts doing at-home learning. But one thing that Covid did do is accelerate the security risks. We all saw the stories of Zoom meetings getting hacked. And so, you have technologies that came along, like SASE (secure access service edge), SOC services, and zero-trust networking. Those are all birth children of the hackers upping their game and seeing the vulnerability.

So, if I'm an executive and I've got my laptop and all of a sudden I'm working from home, that's fine, that's great. But my kid might have his laptop, and God knows what sites he goes to. Now we're at home and on the same network. So, this whole concept of zero-trust networking is like, (my kid and I) might be sitting on the same couch together, but my computer absolutely does not trust anything that's coming from his computer. I'm just walled off, and so should they be. I think the technologies can tackle that problem. I don't think we have to run back to pen and paper.

Davis: I would just take it from a parental standpoint, living in a rural community in South Carolina: we were not prepared for distance learning. As a parent and as an educator, there was a tremendous hurdle to overcome with distance education. I will say Covid has now prompted more high-speed broadband in our rural community, which we are very grateful for. I do see a big issue with working from home when BYOD, bring your own device, is in place.

… Is there some type of management in place for that remote connectivity, some type of VPN (virtual private network) to isolate those devices and do that zero trust back into the organization? What type of device are they using? Has that device been updated with virus protection? What risks are associated just with the type of devices they're using? It's hard to control BYOD, and it's hard to control remote access, when the company is not in control of the device.

Q. Jamie, you have a lot of employees who work remotely dealing with sensitive information.

Barbery: I think the technology is there to meet the requirements necessary to keep it safe. I don't think we're going backwards, but Covid definitely sped up the need for that to be more seamless. The normal VPN, the normal work-from-home scenario that you had before, that technology has morphed (and is now) much, much better at handling these different risks. There are BYOD risks, but there is technology to handle most of the scenarios that exist. It's very important to understand the risk, though, because BYOD architecture is going to be a lot different than business-owned device architecture. But I do think the technology is there, if the company can afford to do it securely, and that's another driver.

Scott: I'll just offer that traditionally, the perimeter used to be the firewall, and now that perimeter is in everybody's home. That truly changes the landscape and the paradigm. One thing I will cite as a success story: right after Covid hit, Dominion Energy was in the process of building a new building. I'm going to throw out some rough numbers, $6 million to house 500 people. Well, now everyone's working remotely.

They don't need a building to house 500 people. To Dominion's credit, they shifted, they repurposed and created a smaller building, and they transitioned and used all the extra money that they had … to be able to scale up their remote infrastructure, knowing that they were going to have to build out a much stronger infrastructure to address those remote employees. So, kudos to them for recognizing that the need for a 500-person office was no longer there, and to be able to then shift those funds to make sure that people can connect securely.

Q. I think each of you has touched on the worker element of all this, and the companies and their needs. In South Carolina alone, there are more than 2,000 unfilled openings for trained cybersecurity professionals, and on a national level, 300,000 more. Given the high cost of a data breach or a ransomware attack, organizations are more than willing to invest in the people they need to strengthen their defenses. But doesn't that mean that these organizations need to be more proactive in finding, developing, and retaining the cybersecurity workforce? And how can the colleges and the people who are doing the training enhance that situation?

Daughhetee: Well, certainly from our end, as a provider of these services, it's hard to find good employees right now. And so, when we find a good employee, we really try to pay to keep them around. And pay is the wrong word; compensate is a better word, because at a certain point, money does not motivate all your employees. You get into real quality-of-life issues. … As for the training and the courses at the colleges, we see a lot of people coming out saying, Oh, I'm going to be a coder. I'm going to be a programmer. I was just at a cybersecurity workshop last week, and they were talking about the fact that for 20-plus years, an entry-level coder would start at around $80,000 or $90,000. Now they're starting at 45. … Ten, 20 years ago, people weren't that worried about cybersecurity. So that's certainly a growing area. And I think the colleges have a huge role to play there.

Davis: We started our cybersecurity program … the year of Covid, but we have grown to about 65 students right now. We try our best to work with companies in the area, because we want our students to stay local to South Carolina as much as we possibly can; there is a tremendous need here. We work with those companies to see what their needs are. … I try my best to focus on hands-on experiences and as much real-world knowledge as possible: the breaches, how to react to those breaches, and talking with local companies about what to expect. … They're expecting that money, but we have to make them prepared at the college level. We have two things at USC Upstate that we've developed.

We've got the cybersecurity lab; it's a completely isolated lab where our students can do penetration tests and perform simulated attacks. But we also have the National Security and Government Institute, developed by USC Upstate last year, where we're reaching out to military partners such as Palmetto Tech Bridge, which is with the Naval Information Warfare Center, based out of Charleston.

The goal is to get our students acclimated and started in discussions with these people about where they can get internships and hands-on experience before they ever graduate, so they can find what I like to call their niche. Every student that comes into the classroom with me, whether it's a networking student or a programming student, I tell them, you're going to find that niche. And if you're in cybersecurity, your niche is going to demand all of it, because you're going to have to know that programming, you're going to have to know that networking, you're going to have to understand that infrastructure.

Most importantly, you're going to have to understand how to relay that information to non-technology people. You've got to be able to take those risks, understand those risks, and then be able to present it to your chief information officer or your chief financial officer. The financial officer may have no idea of what you're talking about, but you've got to make him understand the risks that are associated with that environment.

The AI piece is really impacting cybersecurity. … Now we've got to understand what's real data and what's fake data, what's been altered, and whether the integrity of the data is still intact, because of AI. AI brings a new level of in-depth understanding that we have to tackle at the college level. Another thing I would like to do at USC Upstate is a cyber range. Basically, it would be a connection to local companies, and it would give our students experience in handling and understanding risk. A small company could come to us and say, We can't afford a cybersecurity expert to come in and tell us our risk, or look at our network infrastructure, and (we would) set up an agreement with those companies where our students, with support and guidance from us and our IT department, can help them walk through the process and understand the risk associated with their company, their users, and their data, and give a foundation for both sides. So, it would be a free service to the companies, but it would also provide some of that experience that students are going to need.

Q. Jamie, when you're hiring people, do you look at how many coding classes they've taken, or what do you look for?

Barbery: I look for eagerness. We can usually be patient enough to let them see all the different areas of cybersecurity. We even have governance, risk, and compliance under cybersecurity, so there are a lot of different things maybe they're not aware of. We really want to find what they're passionate about and eager about. And usually, people who are passionate and like what they do excel at what they do. We've had success taking interns from different colleges and technical colleges, and it's really been great. … They come in knowing a lot more and are very well prepared. Luckily, we've been able to have a few interns every year and then hire them full-time. … If you're eager and willing to learn, we can put you in a place where you'll thrive.

I think that's been the most important thing. But I have been impressed with the younger generation coming out of the schools and with their preparedness. It's been great. Like I said, I think it's more difficult to hire for some of the senior positions. We've had a lot of competition. So, when you get really good people, it's really important to keep them, like you mentioned. It's very competitive.

Q. Tom, what have you found?

Scott: In cybersecurity, we have what's called a negative unemployment rate, meaning there are more jobs than there are people to fill them. That's part of a workforce development gap that we see globally and nationally, and it impacts us even locally. From CyberSC's perspective, we've partnered with Apprenticeship Carolina, looking at apprenticeships as a different way of addressing that workforce gap. We just held a webinar a couple of weeks ago. We are working with some of the historically Black colleges and universities, like South Carolina State, to help build out their cyber programs.

I'm working this summer with a Department of Energy grant to historically Black colleges and the Savannah River National Laboratory, looking at how we make sure the workforce needs are addressed. We also stood up a Women in Cyber Security chapter here in South Carolina, affiliated with the Global Women in Cyber Security Group. And we're working to promote a job board within the state of South Carolina, trying to link up industry and employers with the workforce, the seekers of those jobs we've talked about.

Q. Brian, let me get back to you and just ask you, did you mention that the salaries at one time were $90,000 and now they're down to $45,000?

Daughhetee: Well, for an entry-level programmer position, yeah, they were just in such high demand. And now I think it's in that 45 range. And a big part of that is because you can even go to ChatGPT and say, Hey, write me a code in this language. It may not be elegant, but it'll work. And then they can start tweaking it. So, a lot of times, that's half the ball game. I use ChatGPT for, OK, I know that I've got to give this presentation on this. And so, I'll do the bullet points. And it's like, OK, give me some text and I'll use it. The last thing you want to do is have it write your whole thing for you. But it's a great kick starter. It gets you down the road. 

A lot of times I'll go, OK, you missed this piece, or sometimes this piece is just flat out wrong. And so, it's really important that you read it. But certainly, it's a tool. And that's my thing with technology is … it is a tool. And it ultimately comes down to whose hand is that tool in, whether it's a good or bad thing, because that's really the driver.

Q. Let’s talk about artificial intelligence. It's changed the game for just about every type of business, including ours. Under what conditions should it flourish?

Daughhetee: At the conference I was at last week, they had someone come and said, Oh, we've got a video from Lindsay, and everybody knows Lindsay from this company because she does all the videos for the security company. And so, they pop it up on the screen and we watch this three-minute presentation by Lindsay, and Lindsay's just sitting there. She's in the audience. We're all looking at her, and she's just sitting there with this big grin on her face. And when they're done, they're like, Yep, that was 100 percent AI-generated. 

What they did is, there's AI tools out there where they fed all of Lindsay's prior marketing pieces in there. And they said, OK, I want you to create a video on these topics. And I want her to say these things and cover these points and put in some cool graphics and maybe even a little fake news logo in the background. And you're totally sucked in. And it was a cautionary tale, right? Because, again, it's powerful. That company used it for its purposes to communicate how powerful AI is.

Q. And it’s easy.

Daughhetee: And it is easy. And it is easy at the same time Lindsay might have been going, Am I going to have a job next month? I think ultimately what will have to happen is some sort of tag because the fakes are getting so good. There has to be some embedded tag that if you're using an AI tool, there's something embedded in the metadata that the techs can look at and go, This is a fake. Much like copyright protection on music.

Davis: I think we have to learn how to use it wisely, but it is a tool, and it's going to be used against us. Absolutely. I think the metadata tagging is an important part. If we're talking about regulations or how AI software is going to be used, that needs to be an important component of it. But on the education standpoint, students can use it all day long to try and write that paper and won't even read what it has generated. It's really obvious right now, but just like anything else, it's going to get better and better and better. I have students now that will take the PowerPoints that I give them in class, and they'll throw that PowerPoint into an AI generator, they may write a little bit of scripting code on their own to bring out some of that information they want to highlight, and they'll create tests, mock tests out of it.

They'll use it to help write papers, and they're using it as a tool. So, when they're using it to help them study, I'm good with it. But if they're using it to create a report or a presentation that they're supposed to research, then that's where we on the education side have to be able to differentiate what is original versus what is AI-generated.

I think there are really good opportunities for using AI intelligently, and especially when it comes to firewalls and virus protection on the programming side. But we're going to have that integrity issue of understanding, is that video legitimately that person, that we're going to have to deal with. … I still think there's a lip sync effect on a lot of the videos where the mouth is off just a little bit. It's little tweaks here and there that you can still visually see when it's AI-generated, but that's going to go away eventually. I'm not sure if there's a tool out there or some type of firewall algorithm that can help people look if that data was generated by AI or ChatGPT did this. 

Daughhetee: We had someone do a presentation for us, and they told us upfront, Hey, we had AI generate this for us, and we're watching it. I think about it like a T-Rex; the person who was talking would just flip their hands up and down, but the arms never moved. So those are the tells. But then they said that was the free version of the software. If I pay the 99 bucks a month subscription or whatever, no, you can't tell. And so, it is. It's getting better and better. And we have to be vigilant. We don't want to just trust everything that someone tells us. If it doesn't sound quite right, dig into it, look into other sources, verify it that way.

Davis: That’s exactly what I say. Don’t use just one source of information.

Barbery: Being a financial institution, my primary concern is protecting our private, sensitive customer information and understanding where it goes. So, for instance, with a lot of the public models, ChatGPT and so forth, a lot of the consumer-based products like those, how can we keep our customer information from being used to update the model further? That's the biggest concern for a lot of us in the financial industry. We have to pick a winner, pick approved use cases for these things, or build models in a private setting in Azure or some other form. We've tried our best to only have approved ways you can use AI within the institution. That is changing all the time because it is built into almost every product we have. For good and for bad, our priority is making sure our customer information is secure.

We're very methodical about the use cases and policies around its use. In addition to that, I would say the video and deep fakes, it has caused us to completely eliminate voice as an authentication methodology. That has completely gone. … We've eliminated a lot of factors of authentication that we've had previously. Then the human element, so social engineering, the phishing emails, those types of things. They can pull all the information off of public information, LinkedIn, other things, and build very, very, very realistic emails and other social engineering. So, it's very easy to trick people. AI has just made it much easier to social engineer people. So, we really had to combat that with a lot of controls. And then instead of looking for these signs, is there a misspelled letter? Is there something that does not look right?

Instead of doing that, because it's so perfected these days, you ask yourself, why would this person ask me for this information? So, make it more of a question rather than trying to identify things in the email that are incorrect. … Don't let social engineering keep someone from doing the right thing just because they want to help people.

The human instinct is to try to help someone, especially a customer that's paying you. You have to keep in mind that security is a culture, so that you influence people to do things the right way and don't make exceptions for someone just because you're under pressure.

Q. Tom, being in the business of accurate, factual, truthful information, if I have a question, I want to Google something, I'll get some AI responses. Now that's forcing me to do the old-fashioned, let's verify the information we've got. I've got to go to a second source. Is that what you're finding when you're dealing with the folks that you deal with, and has CyberSC taken a position on AI that's radically different than it might have been two years ago?

Scott: You asked earlier what conditions. For me, the conditions are, it has to be controlled, it has to be governed. I was recently at the University of South Carolina School of Law, 300 people, and I said, By show of hands, how many of you are using AI in the workplace? Ninety percent of the hands went up. I said, OK, keep your hands up if your organization adopted a policy on the use of AI before you started using it, and almost every single hand came down. So, I think controlled and governed is key. The folks at Milliken, I think, are doing a really good job. They've got a policy on AI and emerging technologies. They've got a committee comprised of their C-level individuals, and they've sandboxed their own instance of AI, so it doesn't have the ability for contamination or bias from the internet.

I agree as we're talking about the students. I think a key piece there is we've got to teach our students how to use AI. We've got to teach them the right ways to use those tools. Because in the business world, it's already being used. We're doing our students a disservice if we don't educate them. To your point about the sophistication, a phishing email recently had 10 communications in it. It wasn't just a from-me-to-you, but there was a thread that looked like people had been communicating with each other to make it that much more sophisticated. The last thing I'll throw out, I heard a term the other day, BYO AI.

We all know about the challenges of bring your own device. Well, guess what? I've got ChatGPT on my phone. All it takes is an internet browser. What's to keep any employee from buying the $20-a-month version and using it in the workplace? … So, if it's not controlled and governed, you're going to find your employees are going to bring their own devices, they're going to bring their own AI.

Barbery: I think instead of just having the stick, allow the carrot. So, allow these approved sandboxes, allow some way for someone to use AI. Don't say, OK, we're going to block AI completely. Someone's going to go around it, use a personal phone to do this, and they can easily get around the controls in place. It's impossible to block every type of AI from the organization, even if you wanted to. Allow it by building the right policies and governance, just like you mentioned, and then specific acceptable-use policies. Because really it's up to the individual. You mentioned earlier, how do you know if you search something in Google, how do you do secondary research on it?

At the end of the day, we make our employees responsible for whatever they're using the data for, in a presentation or otherwise; they need to use personal responsibility to make sure that the information is accurate. Just like before, when the internet started, you didn't know whether to believe it or not believe it. It's still up to the individual to do that due diligence to make sure the information they're providing to their coworkers or to customers and so forth is reliable and accurate. We handle that through acceptable-use policies and other emerging AI policies. … But there are always ways to get around it. It is integrated into everything we do today. We have to make sure that we understand the use cases and then give someone a path to do it the right way instead of letting them do it behind our back.

Scott: You've got to train it. To your point earlier, I was using ChatGPT not only to write speeches, but to do marketing. And I was reading through it, and then I even asked, How can you keep listing services that we don't provide? It was like, Because you haven't taught me well enough.

Daughhetee: A lot of people forget that. All the AIs that are out there, those products aren't fully baked. The humans using them are the actual products, and we are the products to the AI companies, because every time we ask a question, it gives a response, and then we tell it. It asks, Was that what you were looking for? And yes, no. And it learns from that. And then the next time it does a response for someone else, hopefully it's a little closer. So, we, as the users, are actually the products that are being consumed by the AI companies. And people need to realize it's no different than Facebook and the others where they're like, Hey, here's your picture now. Throw a picture of yourself up when you were 10 years old. The AI is looking at it like, Let's do age progression. And that's how it learns those things.

Davis: Big data and machine learning. It’s all-encompassing.

Daughhetee: But there's a huge positive potential here, too. I mean, the time savings. So, it's not all bad, but the guardrails have to be up. … The rules need to be in place. And I agree with you 100 percent. Don't try to tell people no. Don't always start with the stick. It's like, OK, yes, you can use it, but here are the guardrails. That way, they're not tempted to go use their phone. They're going to use your tool, your way of doing it, and you can at least have some level of safety built in.

Barbery: Unfortunately, cybersecurity teams are always known as the Department of No. And so, I think that my strategy is a little bit different working with the IT operations group working with our customers and our coworkers. I think it's much easier when you provide a way to do something rather than say no completely. Give someone an option.

Davis: That was exactly how I was trained. You tell them no, but you tell them these are your alternatives. This is what we can do. These are your options.

Q. I want to thank you for a great conversation today on a very important topic. I appreciate each of you participating. It's a great discussion. Your insight, extremely valuable. So thank you very much.