Robert Morris and the Morris Worm: 99 lines of code that changed cybersecurity forever.

The Night a Grad Student Broke the Internet (And Why Today We Celebrate National Computer Security Day)

A Curious Question, A Catastrophic Result

On November 2, 1988, at 8:30 PM, a 23-year-old Cornell graduate student named Robert Tappan Morris had a simple question: How big is the internet?

To find out, he wrote 99 lines of code—a self-replicating program designed to quietly count computers on the network. He released it from an MIT computer (to hide his tracks) and went to dinner.

By the time he got back, he’d accidentally crashed 10% of the entire internet.

The Morris Worm decompilation on display at the Computer History Museum. Photo courtesy Intel Free Press.

What Happened

Within 24 hours, about 6,000 of the 60,000 computers connected to the internet were grinding to a halt. Harvard, Stanford, NASA, and military research facilities were all affected. Vital functions slowed to a crawl. Emails were delayed for days.

The problem? A bug in Morris’s code. The worm was supposed to check if a computer was already infected before copying itself. But Morris worried administrators might fake infection status to protect their machines. So he programmed it to copy itself anyway 14% of the time—regardless of infection status.

The result: computers got infected hundreds of times over, overwhelmed by endless copies of the same program.
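The fatal design choice is easy to see in a few lines. The following is a minimal Python sketch of the logic described above, not Morris's actual C code; the function name and constant are illustrative.

```python
import random

REINFECT_PROBABILITY = 1 / 7  # roughly 14%: the anti-spoofing "dice roll"

def should_install(already_infected: bool) -> bool:
    """Decide whether the worm copies itself onto a host.

    The intended check: skip hosts that already run a copy.
    The fatal flaw: to defeat administrators faking infection status,
    the worm reinstalls anyway about 1 time in 7, so busy hosts
    accumulate copy after copy.
    """
    if not already_infected:
        return True
    return random.random() < REINFECT_PROBABILITY
```

With thousands of probes arriving, even a 14% reinfection rate piles up duplicate processes on an already-infected machine until it grinds to a halt.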

“We are currently under attack,” wrote a panicked UC Berkeley student in an email that night.

A VAX 11-750 at the University of the Basque Country Faculty of Informatics, 1988—the same year the Morris Worm struck. VAX systems running BSD Unix were primary targets. Photo: Wikimedia Commons

The Aftermath

The Morris Worm caused an estimated $100,000 to $10 million in damages. Morris became the first person convicted under the Computer Fraud and Abuse Act, receiving three years probation, 400 hours of community service, and a $10,000 fine.

But here’s the thing—Morris didn’t have malicious intent. He genuinely just wanted to measure the network’s size. His creation accidentally became the first major wake-up call for internet security.

The incident led directly to the creation of CERT (Computer Emergency Response Team) and sparked the development of the modern cybersecurity industry. The New York Times even used the phrase “the Internet” in print for the first time while reporting on it.

Why November 30th?

In direct response to the Morris Worm, the Association for Computing Machinery established Computer Security Day just weeks later. They chose November 30th specifically—right before the holiday shopping season—because cybercriminals love exploiting busy, distracted people.

That reasoning is even more relevant 37 years later.

The “1977 Trinity”: Commodore PET, Apple II, and TRS-80. Byte Magazine retrospectively named these three computers the pioneers of personal computing. When the Morris Worm struck in 1988, most people had never heard of “the internet.”

1988 vs. 2025: A Quick Comparison

Consider how things have changed:

Then: 60,000 computers connected to the internet.
Now: Over 15 billion devices.

Then: Total damage from Morris Worm: $100K-$10M.
Now: Average cost of a single data breach: $4.44 million.

Then: Attack motivation was curiosity.
Now: 97% of attacks are financially motivated.

Yet some things haven’t changed. The Morris Worm exploited weak passwords and unpatched systems—the same vulnerabilities that cause most breaches today.
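Why do weak passwords still fall so easily? The worm carried a simple dictionary attack: it tried the account name and variations of it, a short internal word list, and the system dictionary. Here is a toy Python illustration of that idea (not the worm's code; the word list and function name are hypothetical examples).

```python
# Obvious guesses a dictionary attack tries before brute force is needed.
COMMON_GUESSES = ["password", "qwerty", "letmein", "robert", "123456"]

def crackable(password: str, username: str, guesses=COMMON_GUESSES) -> bool:
    """Return True if the password falls to a simple dictionary pass.

    Like the 1988 worm, try the empty password, the username,
    the username reversed and doubled, then a short word list.
    """
    candidates = {"", username, username[::-1], username * 2, *guesses}
    return password.lower() in {c.lower() for c in candidates}
```

A password derived from the username or a common word falls instantly; a long random passphrase does not. The same logic powers credential-stuffing tools today, which is why password policies still matter.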

The entire internet in 1977—just a handful of connected institutions. By 1988, this had grown to 60,000 computers. Today: over 15 billion devices. Source: Wikimedia Commons (Public Domain)

What This Means for You

Computer Security Day isn’t just history—it’s a reminder that the basics still work:

Multi-factor authentication stops 99.9% of account compromises
Regular, tested backups can save your business from ransomware
Employee training dramatically reduces successful phishing attacks

And yes—the holiday season really is prime time for attacks. Stay vigilant through January.

One More Thing

Robert Morris never went to prison. After completing his sentence, he co-founded Y Combinator (the startup accelerator behind Airbnb, Dropbox, and Reddit) and became a tenured professor at MIT—the same school where he launched his infamous worm.

In 2015, he was elected a Fellow of the Association for Computing Machinery—the organization that created Computer Security Day in response to his attack.

The lesson? The person who exposed the internet’s greatest vulnerabilities is now part of the establishment working to secure it. Threats evolve. Defenses must evolve too.

The question is: will yours?


Take Our 2-Minute Security Assessment →

CentrexIT has been protecting San Diego businesses since 2002. Questions about your security? Let’s talk.


The ChatGPT Confession: How Your Employees Are Accidentally Leaking Proprietary Data to AI

Your employees aren’t trying to sabotage your company. They’re just trying to be productive.

A Google engineer copies a few lines of proprietary code into ChatGPT to debug a problem. A Samsung employee pastes semiconductor design specifications into a prompt, asking the AI to help optimize performance. A healthcare administrator shares a patient dataset they believe is de-identified to train an AI model for internal use. A financial analyst includes client account numbers in a spreadsheet she uploads to an AI tool for analysis.

<< Schedule your Cybersecurity Risk Assessment today >>

Read more “The ChatGPT Confession: How Your Employees Are Accidentally Leaking Proprietary Data to AI”


The Video Call Requesting Money—That Wasn’t Real

A finance manager at a multinational company joins what appears to be a routine video conference. On screen: the CFO and several other executives. They need urgent approval for a $25 million transfer. The faces are familiar. The voices match. The urgency seems reasonable.

The transfer is approved. Days later, the company discovers the truth: every person on that video call was an AI-generated deepfake. The $25 million is gone.

This isn’t a hypothetical scenario. It happened in 2024. And according to Keepnet Labs research, more than 10 percent of companies have now experienced attempted or successful deepfake fraud, with losses from successful attacks reaching as high as 10 percent of annual profits.

If you run a healthcare organization, life sciences company, or nonprofit operating on tight margins, you’re not immune. You’re actually more vulnerable.


Read more “The Video Call Requesting Money—That Wasn’t Real”


AI for Nonprofits: Protecting Donor Data, Securing Peace of Mind


As a nonprofit leader, your heart is in your mission: serving your community, advocating for your cause, and making every donor dollar count. You also seek innovative ways to amplify your impact. New technologies like Artificial Intelligence (AI) are definitely on your radar. AI promises exciting efficiencies, from streamlining communications to enhancing data analysis. However, like any powerful new tool, it also introduces new considerations, especially when it comes to the sensitive donor and client information you manage.

The challenge isn’t just understanding the technology; it’s safeguarding the very trust your organization builds. Carelessly using certain AI tools with sensitive data isn’t just an “IT issue”: it directly threatens your financial stability, risks operational continuity, and, most importantly, jeopardizes the hard-earned trust of your donors and the integrity of your mission. Understanding and proactively addressing these evolving risks is essential for secure growth in the modern nonprofit landscape.

Read more “AI for Nonprofits: Protecting Donor Data, Securing Peace of Mind”