
There’s a cycle in computing, a pattern that has repeated itself several times. IT and engineering teams invent new technology. They build infrastructure around it, and its use grows to a peak. Within a short time, the once cutting-edge technology is in wide use across companies. For infrastructure, this progression has run from mainframes to personal computers, then to servers and virtual servers. For applications, we moved from mainframes to on-premises servers and finally to the web.
However, every time this cycle has played out, security has lagged behind the new technology and its expansion. It typically takes companies years to work out how to protect each new technology. Unfortunately, just as we get better at securing something, the world shifts to the next new thing, and the security lessons learned from the previous one are thrown away. Ironically, the burden of configuring, managing, and monitoring security is often part of the reason for moving to the new technology in the first place.
Reasons For The Cycle
Why does this happen? The main reason is that the people involved in the early creation of a new technology don’t conduct adequate security analysis. They’re focused on the capabilities they want to deliver, and they assume that security personnel will “take responsibility” for any back-end problems.
Because security is a last-minute consideration whenever something new is built, cybercriminals, along with a handful of skilled security researchers from the community, end up conducting the first “penetration test” of the latest technology. Inevitably, security incidents happen and people get hurt. Only after these breaches do organizations start doing a better job of securing their platforms.
Another Cycle: Cloud-Native Technologies
The cycle is repeating itself now with cloud-native technologies such as serverless applications and application programming interfaces (APIs). Security, once again, is slow to catch up. Many security controls were built for the previous round of innovation, traditional monolithic web applications, and they don’t carry over to cloud-native platforms. These platforms require different approaches to authentication, session management, authorization, and input validation.
Still, it’s easy to understand why serverless applications are attractive. AWS Lambda, the most popular serverless computing service, lets you build an application with only a few lines of code, without the expense and hassle of managing the servers it runs on. What many don’t consider are the additional pages of code that may be required to give that application adequate security and visibility.
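To make that contrast concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape and field names (“email,” “amount”) are hypothetical, invented purely for illustration; the point is that the business logic is a single line, while even basic input validation already outweighs it, and production-grade security and logging would add far more.

```python
import json

# A minimal sketch of an AWS Lambda handler (Python runtime). The request
# fields ("email", "amount") and the business limit below are assumptions
# made for this example, not part of any real API.

MAX_AMOUNT = 10_000  # assumed business limit, for illustration only

def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    email = body.get("email")
    amount = body.get("amount")

    # Input validation: the kind of defensive code that rarely appears in
    # serverless demos but is essential in practice.
    if not isinstance(email, str) or "@" not in email:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid email"})}
    if not isinstance(amount, (int, float)) or not 0 < amount <= MAX_AMOUNT:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid amount"})}

    # The actual "application" is this one line.
    return {"statusCode": 200, "body": json.dumps({"charged": email, "amount": amount})}
```

Even in this toy version, the validation code is several times longer than the feature itself, and it says nothing yet about authentication, authorization, or monitoring.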
Repeating The Same Mistakes
I’ve been through several of these storms, and the way these waves are handled seems to be at the heart of many of the problems we face in cybersecurity. Because security isn’t built into new technologies from the beginning, attackers move in quickly and cause massive damage before we catch up.
There has been a great deal of conversation about the cybersecurity skills shortage. Many cybersecurity positions sit unfilled, and that causes serious problems for numerous organizations. But the severity of the shortage is a product of a particular way of doing security: reactive rather than proactive, relying on a labor-intensive, brute-force response to threats. We need so many bodies in cybersecurity because our strategy amounts to “throw more people at the problem.”
In other words, instead of creating threat models and implementing solid, proactive controls as they build an application, companies scan for vulnerabilities, review the scan results manually, and then manually remediate each finding, or simply let the vulnerabilities pile up. This consumes enormous resources without making the organization much more secure than if it had done nothing at all.
Moving Beyond Brute Force
Although most people see the rationale for moving beyond this scattershot method, it exerts a significant gravitational pull. IT governance policies at many organizations mandate older security tools and processes even when alternatives would provide greater security with fewer resources. Likewise, a fast-moving marketplace creates constant pressure to build applications ever faster, and it’s easier to jump straight into development than to put in the effort to design the application securely before coding.
What could we do if we broke free of the pull of reactive security and focused on the things that matter? We could build security into new technologies as they develop instead of treating it as an afterthought. We could be regular, prioritized, focused, organized, and strategic in how we apply tools, processes, and people. And we could help developers write more secure code by giving them immediate feedback, as sketched below.
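As one example of what immediate feedback can look like, a team might run a security linter on every commit. The sketch below uses the pre-commit framework with Bandit, a real open-source Python security scanner; the pinned revision is illustrative and would need to match whatever release a team actually adopts.

```yaml
# .pre-commit-config.yaml: run the Bandit security linter before every commit,
# so insecure patterns are flagged while the code is still fresh in the
# developer's mind rather than months later in an audit.
repos:
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.5        # illustrative pin; use a current release
    hooks:
      - id: bandit
```

The specific tool matters less than the timing: feedback at commit time is proactive, while a quarterly scan report is the brute-force approach described above.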
In the same way, we must make security more visible. If people could tell which software was more secure and which was less secure, they would choose accordingly. The executive order the White House issued in May could move us in that direction. For example, it calls for software providers to supply a “software bill of materials,” a sort of “ingredients list” for their software. We need better reasons to believe something is safe before we trust it with critical matters such as elections, finances, and healthcare. With better processes that build security into the design of every new technology, and better labeling that helps users understand the security implications of the technology they use, we can deliver what everyone wants: technology its users can depend on.
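To give a sense of what that “ingredients list” looks like in practice, here is a minimal sketch of a software bill of materials in the CycloneDX JSON format, one of the common SBOM standards. The single component listed is just an illustrative example; a real SBOM would enumerate every dependency in the product.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "requests",
      "version": "2.31.0",
      "purl": "pkg:pypi/requests@2.31.0"
    }
  ]
}
```

With a machine-readable list like this, a buyer can check the software’s ingredients against known-vulnerable versions before trusting it, which is exactly the kind of visibility the executive order is after.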