The Evolution of Cyber Risk Management
January 16, 2023
As computer systems become more and more integral to our lives, the need for robust cybersecurity increases. Simply securing data through technical safeguards is no longer enough: in today's digital landscape, human error carries tremendous weight in cybersecurity, and Human Risk Management plays a significant role in addressing it. To understand why and how this has shifted over the years, let's take a look at how cyber risk management has changed.
According to Jeff Yost, in his article "A History of Computer Security Standards," the field of cybersecurity got its start alongside the first computer networks, developed by the US Department of Defense in the late 1960s. Beginning with the Advanced Research Projects Agency Network (ARPANET), a pre-internet network of interconnected computers widely regarded as the first real network as we know it, the governmental agencies that depended on this network naturally needed it to be secure.
As computer technology moved from private, governmental use to public use, the need for security continued, but it also became more complicated. Like other attacks on communication systems, the initial attacks waged against early computer networks were primarily based on seeking out and exploiting technical weaknesses. Attackers manipulated the tones used by the telephone system, for example, to gain access to the network, make free calls, or otherwise exploit telecom infrastructure.
Eventually, telephone companies got wise to this phone hacking, or "phreaking," and changed the way the entire system worked. But computer networks were more complex, and as time went on and the internet expanded, this ever-increasing complexity created more and more network vulnerabilities, just waiting to be exploited.
The 1980s saw the emergence of the first commercially available antivirus software, each program hoping to stay one step ahead of exploits as they emerged. Companies such as Symantec, McAfee, and Trend Micro led the way in providing reliable protection against malicious programs. This period also saw an increase in computer viruses, as hackers began to target personal computers and business networks. In addition, this decade marked the rise of internet security and the formation of the Computer Emergency Response Team (CERT).
The 1990s saw further development in cybersecurity with the launch of firewalls, which act as a barrier to keep unwanted traffic out. Companies such as Check Point Software Technologies created firewall systems that helped protect against intrusions and malicious attacks. This decade also saw a huge increase in cybercrime, with hackers increasingly targeting critical infrastructure and corporate systems. As a result, governments around the world began to focus on improving their cybersecurity capabilities, introducing laws and regulations to protect citizens from online threats.
The start of the 21st century saw the emergence of more sophisticated cyber threats. As technology evolved, so too did the sophistication of malware and other malicious software. The 2000s saw a rise in phishing scams, ransomware, and zero-day attacks, prompting governments and companies to increase their investments in cybersecurity solutions. This decade also marked the beginning of large-scale data breaches, as hackers began targeting sensitive data.
The 2010s saw the mainstream adoption of cloud computing, which enabled companies to store data remotely on shared infrastructure. However, this also created new risks, as malicious actors could now target many systems at once. To protect against these threats, businesses began investing heavily in security measures such as multi-factor authentication, encryption, and cyber insurance. This decade also saw an increase in the use of artificial intelligence (AI) and machine learning algorithms to detect potential threats before they could cause damage.
The 2020s have brought even more challenges to the world of cybersecurity, including state-sponsored attacks, quantum computing, and the rise of 5G networks. In response, governments and companies are investing heavily in innovative technologies such as blockchain and artificial intelligence to help protect their systems from malicious actors. As cybersecurity continues to evolve, we can expect further advances in the technology used to protect our data and networks. Cybersecurity is an ever-changing field, but with the right information and resources, we can all stay safe online.
By understanding the history of cybersecurity, businesses, organizations, and individuals can create an informed approach to security. Whether it's protecting against malicious threats or building a secure infrastructure for data storage, knowledge of the past can help us shape the future of cybersecurity.
The use of "cyber"-everything started with "cybernetics," popularized by mathematician Norbert Wiener in the 1940s. For his book Cybernetics, Wiener borrowed from the ancient Greek word kybernetes, meaning "steersman," which is related to the idea of government or governing. He described a futuristic idea: that one day there would be a self-governing computer system that ran on feedback.
According to the Oxford English Dictionary, there is evidence of the prefix "cyber-" going back to 1961; however, it became popular in the 1990s. The term spread with the invention of the World Wide Web, and the early 1990s saw a proliferation of words like cyberbully, cybercommunity, and cyberwar.
As cyber threats continue to evolve, it's vital that security measures continue to transform as well, including the approach to security awareness and training. Verizon's 2022 Data Breach Investigations Report found that 82% of breaches involved the human element, making human behavior the biggest threat to cybersecurity. Because of this, the best way to reduce cyber risk is to take human behavior into account: not by assuming that humans will always be risky, but by understanding which behaviors lead to cybersecurity risks, and which ones can be changed. It's simply not good enough, or effective enough, to let 82% of breaches fly by with a "Well, what can you do?" approach.
Turns out, there’s a lot we can do.
As concerns about cybersecurity have continued to expand, so too have perspectives on security awareness and training. In the early days of computing, cybersecurity ran parallel to the development of computers themselves, but today it is its own complex arena.
The field of security awareness and training developed out of a need to inform employees across an organization of threats and to train them how to respond. As computer access became more widespread, security teams recognized that more and more threats were targeting regular users, many of whom might not have the same level of technical savvy as those with an IT background. Today, most companies have a dedicated team of cyber risk management specialists who monitor and report on threats and deal with incidents as they arise.
Security leaders train employees to be aware of threats and how to respond to them by informing them of potential risk areas and giving them information about how to prevent an incident. This type of training typically covers a few key categories.
The most important of these categories is social engineering: the psychological manipulation of human behavior that allows cybercriminals to thrive. As noted hacker-turned-security-expert Kevin Mitnick states in his book The Art of Deception, "Why are social engineering attacks so successful? It isn't because people are stupid or lack common sense. But we, as human beings, are all vulnerable to being deceived because people can misplace their trust if manipulated in certain ways."
Human Risk Management faces this risk head-on by changing the cybersecurity risk management model. Instead of assuming that human behavior will always be the weakest link in the security network, Human Risk Management starts with the idea that human behaviors can be changed in order to support a safer security culture, a more empowered workforce, and an organization where every employee can be at the front line of defense against all kinds of attacks.
First, identify what your employees actually do when faced with threats such as phishing, or in situations such as remote work, that could lead to vulnerabilities. Next, analyze and report on any weaknesses, show trends, and preempt threats before they occur: knowing what you now know, which groups' behaviors might put company data at risk? Finally, provide training to those users and groups to move them from disempowered to empowered. CISOs can turn human risk into a security culture that equips each and every employee to know what to do, and when to do it.
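To make that identify-analyze-train loop concrete, here is a minimal sketch in Python of how a security team might score simulated-phishing results and flag groups for targeted training. Everything in it, the field names, the weights, and the threshold, is a hypothetical illustration, not Living Security's platform or any vendor's actual API.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record of one user's results in a simulated phishing campaign.
# Field names and weights are illustrative, not any real vendor's schema.
@dataclass
class SimulationResult:
    user: str
    department: str
    clicked_link: bool       # opened the simulated phishing link
    submitted_creds: bool    # entered credentials on the decoy page
    reported_email: bool     # used the report-phishing button

def risk_score(r: SimulationResult) -> int:
    """Score one result: risky actions add points, good habits subtract."""
    score = 0
    if r.clicked_link:
        score += 2
    if r.submitted_creds:
        score += 5
    if r.reported_email:
        score -= 3
    return max(score, 0)

def flag_groups(results: list[SimulationResult], threshold: float = 2.0) -> dict[str, float]:
    """Average scores by department; return groups above the threshold,
    i.e., the groups to prioritize for targeted training."""
    by_dept: dict[str, list[int]] = defaultdict(list)
    for r in results:
        by_dept[r.department].append(risk_score(r))
    averages = {dept: sum(s) / len(s) for dept, s in by_dept.items()}
    return {dept: avg for dept, avg in averages.items() if avg >= threshold}

if __name__ == "__main__":
    sample = [
        SimulationResult("ana", "finance", clicked_link=True, submitted_creds=True, reported_email=False),
        SimulationResult("ben", "finance", clicked_link=True, submitted_creds=False, reported_email=False),
        SimulationResult("cam", "engineering", clicked_link=False, submitted_creds=False, reported_email=True),
    ]
    for dept, avg in flag_groups(sample).items():
        print(f"{dept}: average risk {avg:.1f} -> schedule targeted training")
```

In practice, this loop runs continuously: each training cycle feeds new behavioral data back into the identification step, which is what lets a program show trends over time rather than one-off snapshots.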
Cybersecurity is changing, and if history is any indication, it will continue to change, grow, and adapt to new challenges. More and more companies are considering how best to integrate things like long-term remote work, new security architectures, and higher supply chain risk into their cybersecurity frameworks.
It's vital that security leaders continuously evaluate the security management of their organization. Thankfully, tools are emerging on the market that allow these leaders to gain insight into their workers' behavior, assess the risk, and apply the findings. Once they know how to assess cyber risk with the human element in mind, the whole culture can change for the better.
Human-driven risk thrives on fear, uncertainty, and manipulation, but effective Human Risk Management only grows stronger with knowledge, confidence, and empowerment. Whether it's preventing employees from doing what they shouldn't, or encouraging them to do correctly the things they might be skipping over or forgetting, simple changes and engaging training can go a long way toward preventing cyber attacks before they start. At Living Security, our Unify Human Risk Management platform can change the cybersecurity game for your organization starting today.
Want to learn more? Let's talk.