Cybersecurity: A Holistic Approach

I. Introduction

Thank you for that kind introduction and thank you for inviting me to speak as part of Moody's Big Picture Speaker Series. Credit rating agencies – and credit analysts, such as yourselves – play an important role in the financial ecosystem. Your research, analysis, and ultimately your ratings, help investors make more informed decisions. Better informed decisions, in turn, enhance the efficiency of our capital markets.

Today, I'd like to share my thoughts on a topic that has captured my attention over the past several years: the internet, or cyber, and its risks. Cyber provides unprecedented opportunities by connecting individuals and organizations to each other and the world. It affects our capital markets, financial systems and, more broadly, our economy and national security. But these opportunities come with significant risks.

Technology is only one part of the equation to address cyber risk. Increasingly, financial and other regulators expect the companies they regulate to embed cybersecurity into their own control infrastructures. Regulators also expect companies to understand the security safeguards in place at entities with access to those companies' networks, systems, and data.

A holistic approach to cybersecurity is what governments, regulators, customers, investors, and the public expect and deserve. These constituencies also expect boards and executives to actively oversee governance, control, and enterprise risk management programs with cybersecurity built in.

Let's talk about what this means. I'll share my views on cybersecurity informed by three different experiences: as a former U.S. Treasury official; as the past head of a global consulting practice who advised financial institutions on cybersecurity; and as a current board member overseeing the work of auditors at the Public Company Accounting Oversight Board (PCAOB).[1]

II. Setting the Stage

A. Yesterday

Just over 30 years ago, the first cyber-incident began benignly enough. A graduate student from Cornell University, Robert Tappan Morris, wanted to gauge the reach of the then-fledgling internet. He developed a simple yet clever way to do so.

Morris created a self-replicating computer program (or worm). Once launched, the worm was supposed to crawl through the network of computers that made up the internet. By exploiting security vulnerabilities and guessing weak passwords, the worm would install itself on any computer connected to the internet that did not already have a copy of the program installed on it. Once fully deployed, the worm was then supposed to calculate the number of computers connected to the internet based on the number of successful installations.

Simple enough in theory. But then theory crashed headlong into reality.

Fearing that some computers might have security features that could trick the worm into erroneously believing that a copy of the program had already been installed when it had not, Morris had an idea. He would write a command to outsmart these potential security features. That's when things went downhill.

The new command caused the worm to install itself about one time in seven even when a computer indicated that the program had already been installed – in effect, installing the software by brute force. What happened? The worm installed itself on thousands of computers multiple times, and with each new installation, the affected computers slowed further. Eventually the infected computers crashed. The result? The first known Distributed Denial of Service (or DDoS) attack.
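To make that flawed decision rule concrete, here is a minimal, purely illustrative sketch in Python – not the actual worm code, which was written in C for the Unix systems of that era – of the logic described above, with hypothetical names throughout. A short simulation shows how redundant copies pile up on the same machines.

```python
import random

REINSTALL_ODDS = 7  # about one time in seven, ignore "already installed"

def should_install(reports_already_installed: bool) -> bool:
    """Install if no copy is present; otherwise reinstall anyway roughly one
    time in seven, to defeat machines that might falsely report a copy."""
    if not reports_already_installed:
        return True
    return random.randrange(REINSTALL_ODDS) == 0

# Tiny simulation: repeated encounters pile redundant copies onto the same
# machines, which is what dragged them down and eventually crashed them.
copies = [0] * 100                      # 100 hypothetical connected machines
for _ in range(10_000):                 # 10,000 infection attempts
    target = random.randrange(len(copies))
    if should_install(copies[target] > 0):
        copies[target] += 1

print("average copies per machine:", sum(copies) / len(copies))
```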

Now infamously known as the "Morris Worm," the program damaged approximately 6,000 computers, or roughly 10 percent of the computers then connected to the internet. The estimated cost of repair? As high as millions of dollars.[2]

According to Morris, the whole thing was a misunderstanding, an accident. The government did not view it that way. Morris was prosecuted and achieved the dubious distinction of being the first person convicted for violating the Computer Fraud and Abuse Act of 1986.[3]

At trial, Morris maintained that his motive was not criminal but rather merely "to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects [that he] had discovered."[4] The jury saw it differently. Morris was sentenced to three years of probation and 400 hours of community service, fined $10,050, and ordered to pay the costs of his supervision.

What ultimately became of Morris? He is now a professor at MIT.[5]

B. Today

Fast forward to today. Instead of being used by a small, discrete group of academics, researchers, and computer enthusiasts, the internet and the computers connected to it now permeate much of our lives.

We communicate through the internet. We use it to engage in commerce. Organizations of all types – energy, healthcare, financial services, nonprofits and humanitarian groups, and governments – operate on the internet. Vast amounts of personal and other data are accessible there as well.

Everyday objects – so-called "Internet of Things" devices – are now all connected. Personal computers, smartphones, cars, thermostats, wearable gadgets, lights, and cardiac monitors – to name but a few – all send and receive huge amounts of data largely unfettered by country borders.

To fully appreciate the magnitude, scope, and speed of this change, think about this: By next year, internet-connected devices are expected to number almost 31 billion.[6] This translates into nearly four devices for every man, woman, and child on the planet. That is an increase of more than 500,000 percent since the ill-fated launch of the Morris Worm. Astounding!

In hindsight, the Morris Worm proved to be a canary in the coal mine, warning us of the potential perils of the internet's unprecedented access and interoperability when combined with the ubiquitous use of computers and technology. It also warned us about the dangers of treating security as an afterthought.

Since the release of the Morris Worm, threat actors have changed as well. They have moved well beyond misguided graduate students and computer fanatics hoping to prove a point. Until recently, these actors fell into distinct categories based on their perceived motives. For example, hacktivists engaged in vandalism through the internet to damage or disable systems to call attention to their causes or just to embarrass institutions and disrupt operations.

Lone crooks, and then increasingly criminal syndicates, engaged in economic misdeeds, such as stealing financial account information and credentials. Armed with this stolen data, the culprits triggered the diversion of funds or other assets. They also deployed ransomware, a type of malicious software designed to block access to files until a sum of money is paid. Regardless of the techniques used, their goal was to make money.

Rogue nation states began to use the internet for geopolitical purposes. In 2014, North Korea attacked Sony Pictures Entertainment. The attack destroyed systems and wiped out massive amounts of data. Unreleased movies were posted to the internet, devaluing their underlying intellectual property. Highly sensitive, private information and communications were publicly released to embarrass company executives.

Today threat actors and the types of tactics they use are increasingly merging and blurring. Nation states hire mercenary hackers and criminals to help them perfect their cyber activities. Some of those rogue nations or their proxies exploit their tradecraft to steal funds.

Military and intelligence personnel with nation states moonlight with criminal enterprises to make extra money. And, on the dark web – a digital bazaar for illegal activities – even relatively unsophisticated actors can purchase malicious software to engage in crimes.

Why does this matter? Increasingly, cyber-criminals can use methods that a short time ago were the sole purview of sophisticated nation states. For companies and other institutions, this means they must prepare for sophisticated threat actors whose motives and capabilities are less clear.

III. The Role of Treasury

A. The Imperative

When I joined the U.S. Department of the Treasury in the fall of 2014, the department's approach to cybersecurity tended to be reactive. Treasury – and the financial sector and its regulators – typically responded to cyber threats, vulnerabilities, and incidents after the fact. More firefighting than fire prevention.

When I left Treasury two and a half years later in 2017, we had a more proactive, strategic approach to cybersecurity. Why the change?

Two years earlier, at the end of 2012, the websites of six large U.S. banks – Bank of America, JPMorgan Chase, Citigroup, PNC, Wells Fargo, and U.S. Bank – were all targeted with coordinated DDoS attacks.[7] Those attacks severely disrupted the online and mobile banking services of the institutions. It was a wake-up call for each company, but also for the financial sector in general and Treasury.

We came to recognize cyber threats as one of the most pressing operational, economic, financial stability, and national security risks of our time. Cybersecurity became a top policy priority. We also recognized that to respond most effectively, we needed to act together. The government and private institutions had to share responsibility for protecting our individual and collective assets.

We recognized too the importance of building in security upfront and became acutely aware that resilience – the ability to recover quickly from significant cyber-events – was a key part of the calculation. We knew that we also needed to join forces with like-minded countries across the globe.

B. The Mission

One of Treasury's key missions is to maintain a strong economy and strengthen national security by combating threats to the U.S. financial system and economy. So, when we thought about cybersecurity, we considered it through the lens of our mission. We also considered cybersecurity through Treasury's role as the federal agency responsible for coordinating government-wide activities affecting the financial sector – a sector designated as critical to the United States.

As those of you who cover the financial sector know, it has a diverse set of companies – ranging from banks, insurers, broker-dealers, and investment advisers to financial market utilities like exchanges, payment systems, and clearing houses. The sector also has an assortment of regulators and supervisors with distinct missions and statutory and regulatory authorities. Those authorities are not identical, and their approaches can vary when overseeing regulated entities' IT systems and technology and the protection of customer data.

C. The Strategy

Given these varying regulatory approaches, Treasury's strategy for the sector centered on four themes.

First, we highlighted the need to identify and adopt baseline protections and best practices for cybersecurity and resilience. Second, we encouraged private institutions to share timely, actionable information on cyber vulnerabilities, threats, and incidents with each other and ideally with the government. Third, we pushed for enhanced response and recovery efforts, including by encouraging institutions to better prepare for the likelihood – or inevitability – of a significant incident. Finally, we prioritized and drove coordination domestically across the sector, as well as internationally.

(i) Domestic Efforts

We began domestically by pushing two initiatives. First, we advocated for the development and use of a common language – or lexicon – to talk about cybersecurity. That lexicon was largely based on a framework developed by the National Institute of Standards and Technology, or NIST.

Why was a common lexicon so important? We knew a shared language would help develop a common vantage point from which all constituencies and stakeholders could clearly communicate cyber-related risks and responses. We needed all parties to use the same words so that they would not talk past one another using imprecise terms or technical jargon.

Second, once that common language was embraced, we encouraged the sector and its regulators to develop a common risk-based approach to cybersecurity. That approach built on and incorporated key concepts from enterprise risk management, and had institutions identify, mitigate, measure, and manage their specific cyber risks.

(ii) International Efforts

Our work did not stop at our borders. At the time, there had been a series of cyber-enabled thefts using the international payment messaging system operated by the Society for Worldwide Interbank Financial Telecommunication (or SWIFT). This system connects 11,000 financial institutions across the globe, including central banks. While the system does not itself transmit funds, institutions use it to send and receive the details of money transfers – basic information such as the amount, sender, and receiver of a payment.
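To picture what travels over such a network, here is a minimal, purely illustrative sketch – it is not the actual SWIFT message format, and every name in it is hypothetical – of a payment instruction carrying those basic details. The point it illustrates is that the network moves instructions, not the funds themselves, which is why stolen credentials are so dangerous.

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class PaymentInstruction:
    """Illustrative only -- not the real SWIFT message format. The messaging
    network carries instructions like this; the funds move separately between
    the institutions' accounts."""
    sender_bank: str    # originating institution
    receiver_bank: str  # beneficiary institution
    amount: Decimal     # payment amount
    currency: str       # ISO 4217 currency code, e.g. "USD"
    reference: str      # transaction reference supplied by the sender

# Example: the kind of instruction an authorized user -- or an attacker armed
# with stolen credentials -- could submit to the network.
instruction = PaymentInstruction(
    sender_bank="HYPOTHETICAL-BANK-A",
    receiver_bank="HYPOTHETICAL-BANK-B",
    amount=Decimal("1000000.00"),
    currency="USD",
    reference="INV-0001",
)
print(instruction)
```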

The most significant cyber incident involving SWIFT occurred in 2016. Cyber actors stole credentials to the SWIFT system. Then, masquerading as authorized users, the criminals attempted to steal $1 billion from the central bank of Bangladesh. The crooks ended up walking away with $81 million, most of which has never been recovered.[8]

That incident served as a wake-up call on how deeply interconnected the international financial system is. To improve cybersecurity domestically, we knew we had to bolster it internationally. We also knew that our efforts could be thwarted by bureaucratic inertia. So the Treasury Deputy Secretary, Sarah Bloom Raskin, co-chaired a group empaneled by the finance ministers and central bank governors of the Group of Seven (G7) advanced economies to focus on cybersecurity.[9]

Over the course of 18 months, this group of cyber experts developed a set of principles on cybersecurity for public and private entities in the financial sector. In October of 2016, the G7 economic leaders endorsed and published those principles.[10] This was the first time that these leaders together had ever publicly communicated operational or policy guidance.

These principles embrace best practices, are process oriented, and use a dynamic, risk-based approach. They are also thematic and designed to be tailored to the particular characteristics of an organization and its cyber risks.[11] As a result, the principles apply to all types and sizes of entities across the financial sector – from depository institutions, exchanges and payment systems, to fintech companies and third-party service providers. The principles also apply to regulators and government agencies.

The principles call on organizations to identify their specific cyber risks and to develop strategies to address those risks, such as deploying technology and other measures to mitigate or manage them. They also call on organizations to periodically re-assess the effectiveness of those strategies in response to changing operational, control, and threat environments. Ultimately, the principles were designed to better arm boards and senior public officials to oversee their organizations' cybersecurity.

IV. The Roles of Boards and Executives

Once I left Treasury, I applied many of the lessons learned there to help clients – primarily boards and executives – develop and implement cybersecurity strategies throughout their organizations.

An initial challenge for us at Treasury had been elevating cybersecurity from server rooms to board rooms and executive offices. The next challenge became supplying boards and executives with the information they needed to effectively oversee and drive cybersecurity at their organizations.

At the time and even today, many boards and senior leaders are not meaningfully engaged in overseeing the cybersecurity programs at their organizations. Likely causes are unfamiliarity and insufficient information, or the belief that it is someone else's responsibility – often someone with more technical savvy.

We asked whether there was a way to make cybersecurity more accessible and better frame the issues for boards and executives. An obvious choice was to frame cybersecurity in terms of those individuals' fiduciary duties of care. Directors and executives have fiduciary obligations to ensure that their institutions have appropriate systems and processes in place to meet their legal and regulatory responsibilities, including around cybersecurity.[12]

Technology provides only part of the solution. Processes and people are the other parts. Financial and other regulators expect the entities they regulate to embed cybersecurity into their core governance, control, and enterprise risk management infrastructures. This means making cybersecurity part and parcel of a company's own systems, operations, and processes to create multiple levels of defense against cyber events. It also means training personnel at all levels on cyber threats and what they can do to respond. Such an approach better ensures that the collective defenses of a company work effectively at times of attack and cannot be circumvented, removed, or defeated.

Embedding cybersecurity also means making sure that the cybersecurity of a company's vendors, or of any third party with access to its systems or data, is up to par. How? Companies need to understand the security safeguards in place at any entities with access to their networks, systems, and data, as well.

Increasingly, this type of holistic approach to cybersecurity is what regulators, customers, credit rating agencies like Moody's, investors, and the public expect. These constituencies also expect boards and executives to actively oversee the design and effectiveness of these programs.

V. The Roles of Auditors and the Public Company Accounting Oversight Board

Let me finish by briefly describing the role that the PCAOB plays related to cybersecurity.

Sixteen years ago, Congress created the PCAOB in the wake of a series of high-profile corporate and accounting scandals at Enron, WorldCom, and other corporate stalwarts. Common themes of these scandals were fraudulent financial statements and questions regarding auditors' objectivity and impartiality. The PCAOB oversees the audits of public companies and SEC-registered broker-dealers to protect investors and further the public interest through the preparation of informative, accurate, and independent audit reports.

Just over a year and a half ago, the SEC installed an entirely new board at the PCAOB. Our job was to reassess each of the PCAOB's core programs – registration, inspections, standard setting, and enforcement – to determine where the organization had done well and where we could do better.

As it relates to cybersecurity, I have been challenging the PCAOB and myself to think more broadly. How could or should the PCAOB approach cybersecurity? Today, I think about our role related to cybersecurity in three ways:

First, under our current standards, auditors of public company financial statements play an important but limited role related to cybersecurity. Auditors are required to assess the use of IT to prepare financial statements and the automated controls associated with financial reporting, such as controls around the reliability of underlying data and reports.[13] This approach addresses financial reporting risk. Therefore, I'm pressing our inspectors to probe whether auditors are meeting their obligations under our current standards.

As part of their risk assessment and audit planning, pursuant to PCAOB standards, I have called on auditors to broadly consider cybersecurity risks that could have a material effect on companies' financial statements.[14] But our standards do not require auditors to assess a company's overall business or operating risks as they relate to cybersecurity. Therefore, I'm also asking whether the PCAOB's auditing standards should or could require more.

Second, I'm focused on the obligations that auditors have to protect sensitive information that clients have entrusted to them, as well as any access to client systems provided to auditors. Today, PCAOB standards do not specifically address the cybersecurity of auditors themselves or even explicitly require auditors to maintain the security of their clients' information or systems.[15] Our standards do, however, require audit professionals to exercise their skill with "reasonable care and diligence."[16] I'm asking questions here too about what more audit firms should or could be doing around cybersecurity to secure their clients' data and system access. I'm also asking whether our standards sufficiently address cybersecurity of auditors.

Finally, as part of the PCAOB's strategic plan, the board has committed to maturing our information security program by strengthening our risk management and data loss protection capabilities.[17] My attention is focused here as well.

VI. Conclusion

Cyberspace provides unprecedented opportunities. But with these opportunities comes the ability for foes to engage in damaging mischief and criminal activity, to undermine our domestic economic health and financial stability, and to raise potential national security concerns.

Sitting thousands of miles away, attackers can steal, cause physical destruction, undercut the integrity of our markets, and attempt to intimidate and challenge our fundamental values and beliefs. But working together we can bolster the cybersecurity of individual organizations and, in turn, that of our nation.

Thank you for your attention. I'd be happy to answer your questions.

[1] The views I express here are mine alone, and do not necessarily reflect the views of the PCAOB, my fellow board members, or the PCAOB staff.

[2] FBI News, "The Morris Worm: 30 Years Since First Major Attack on the Internet" (Nov. 2, 2018). Available at https://www.fbi.gov/news/stories/morris-worm-30-years-since-first-major-attack-on-internet-110218.

[3] 18 U.S.C. § 1030.

[4] United States v. Morris, 928 F.2d 504, 505 (2d Cir. 1991). See also FBI News, The Morris Worm: 30 Years Since First Major Attack on the Internet.

[6] Sam Lucero, IoT Platforms: Enabling the Internet of Things, IHS Technology at 5 (Mar. 2016).

[7] Nicole Perlroth, "Attacks on 6 Banks Frustrate Customers," The New York Times (Sept. 30, 2012). Available at https://www.nytimes.com/2012/10/01/business/cyberattacks-on-6-american-banks-frustrate-customers.html.

[8] Gottfried Leibbrandt, SWIFT, remarks on cyber security and innovation at the annual European Financial Services Conference, Brussels, Belgium (May 24, 2016). Available at https://www.swift.com/insights/press-releases/gottfried-leibbrandt-on-cyber-security-and-innovation; Sergei Shevchenko, "Two Bytes to $951M," BAE Systems Threat Research Blog (Apr. 25, 2016). Available at http://baesystemsai.blogspot.co.uk/2016/04/two-bytes-to-951m.html.

[9] In addition to the United States, this group includes Canada, France, Germany, Italy, Japan, the United Kingdom, and the EU.

[11] While the commentary to the elements explains how they apply to the financial services sector, the elements themselves transcend sectors and can better equip boards and senior public officials at all types of organizations to enhance their cybersecurity and resiliency.

[12] Directors have a fiduciary obligation to assure that their corporation has information and reporting systems in place to facilitate compliance with applicable legal standards. See e.g., In re Caremark Int'l Deriv. Litig., 698 A.2d 959, 970-971 (Del. Ch. 1996).

[13] Paragraph .B1 of AS 2110, Identifying and Assessing Risks of Material Misstatement, Appendix B, Consideration of Manual and Automated Systems and Controls.

[14] Annual Financial Reporting Conference, Baruch College Robert Zicklin Center for Corporate Integrity, New York, NY (May 2, 2019).

[15] Paragraph A72 of AS 1215, Audit Documentation, Appendix A, Background and Basis for Conclusions.

[16] Paragraph .05 of AS 1015, Due Professional Care in the Performance of Work.