In an environment of accelerated digital transformation, the mix of on-premises and private cloud systems makes securing data more complex than ever. Yet organizations of every size, from large enterprises to SMBs, still lack the fundamentals of security, and the human factor remains one of the largest threat vectors that can impact the business. Humans make mistakes, and human behavior is very difficult to change, so basic cyber hygiene is still missing: employees still receive emails from a “Nigerian prince” trying to offload fictitious money, and employees still fall for them.

At the enterprise level, the organization suffers from managing chaos, including a great deal of waste and loss of resources, because it focuses on too many initiatives that are not very effective and spends time in the wrong areas. For the consumer, the issue is still the individual who does not know what they clicked on and downloaded. That is either a governance and resourcing problem or a user who still does not fully understand how a single email can cause serious harm to the business and to the individual.

Security Begins at the Top:

Being a cyber-leader does not require deep technical expertise, but it does require the ability to change the culture of the organization. Reducing the organization’s cyber risk starts with awareness of cybersecurity basics. A leader must drive the organization’s approach to cybersecurity the same way they would address any other business threat. This requires an investment of time and money, as well as mutual buy-in from the management team. That investment drives actions and activities, which build and sustain a culture of cybersecurity awareness.

Cyber Fundamentals: The “Low-Hanging Fruit”

  • Not Implementing MFA: Multifactor authentication is a key component of achieving zero trust. It adds a layer of security to network, application, container, or database access by requiring users to present multiple forms of identity to prove who they are. It is one of the easiest controls to implement, and it forces malicious actors to obtain more than just a username and password. That sharply increases the difficulty for attackers looking to gain access to sensitive information, and it adapts very well to the current distributed computing model, zero-trust environments, and remote work (a minimal sketch of a second-factor check follows this list).
  • Excessive Access Rights | Privilege Escalation: Many of the most devastating breaches of the past 15 years have been perpetrated by insiders or through the abuse of insider-level privileges. The SolarWinds Orion attack is a prime example of the latter: global administrator accounts are often required for legacy applications like Orion to operate correctly and cannot run under the principle of least privilege, and the threat actors leveraged that unrestricted privileged access throughout victims’ environments via the application. For large enterprises and SMBs, the first step in mitigating this risk is to identify every system and application in the environment that requires “root” or “sudo” privileges. Role-Based Access Control (RBAC) has been around for quite some time, but implementing it consistently has become more and more complex with modern use cases, including the growth of cloud services, third-party software, and fourth-party code dependencies. A unified approach to RBAC is critical to reducing risk and meeting evolving compliance requirements (a minimal RBAC sketch follows this list).
  • Insufficient Security Training: Some of the most dangerous threat vectors are insiders who already have access to sensitive data and need only a motive, whether financial or personal. To harden the company from the inside, the outside, and everywhere in between, employees in every department need to make security training part of their culture. Layer training exercises with phishing simulations and event-based learning on how to identify real incidents, and deliver them at a defined frequency (weekly, quarterly, annually, etc.).
  • Compliance Gaps: Another issue is not knowing where the organization’s regulated data resides. As business environments become more complex, data is stored, transmitted, and processed across a mix of cloud service providers (CSPs) and on-premises architecture. For example, a healthcare company that processes, stores, and transmits federal health data must comply with at least HIPAA and DoD RMF, and possibly FedRAMP; it should conduct a shared responsibility exercise and obtain service level agreements (SLAs) from its CSPs to ensure the confidentiality, integrity, and availability of that data. The purpose of the shared responsibility exercise is to establish who is responsible for which security controls. For example, just because the organization leverages AWS RDS for SQL Server (Web edition) does not mean AWS is scanning those RDS clusters; the responsibility falls on the organization to ensure a vulnerability scanning capability covers them (a minimal inventory sketch follows this list).
  • Supply Chain Risk Management: Cloud services today are built almost entirely on third-party tools: monitoring tools, platform services for data storage, Lambda, machine learning, CI/CD roles, and so on. Software is a very large avenue of cyber supply chain risk that is gaining awareness with the emergence of DevSecOps and agile operations, and organizations need to understand the impact that poorly developed software can have on their security posture. That software still has to run on virtualized hardware or bare metal that may itself be legacy technology or poorly configured. For example, most organizations have strict portable-device policies, such as prohibiting USB drives on all company-owned hardware, yet we do not think twice before running a yarn install or a docker pull. Both situations involve taking code from an untrusted source and running it anyway, and with yarn, Docker, npm, and similar tools, that code will eventually find its way into production (a minimal artifact-verification sketch follows this list).
  • Continuous Monitoring Strategy: Traditional “perimeter” protections and continuous monitoring mechanisms will not cut it; the attack surface is constantly moving away from traditional central control points. Continuous monitoring is another aspect of the risk management framework that organizations have traditionally failed to establish, define, and enforce, and with the emergence of more complex technology there are now more assets and devices that need to be included in the ConMon strategy. Start by locating where data and assets reside and prioritizing them by criticality. Once those assets are defined and a business impact analysis has been conducted, define the process for how the security monitoring tools (Splunk, Tenable, Bitdefender, Spiceworks, Palo Alto Twistlock, container scanning, source code scanning, etc.) will cover them (a minimal prioritization sketch follows this list). Most agencies and cloud service providers (CSPs) have adopted NIST SP 800-137A, Assessing Information Security Continuous Monitoring (ISCM) Programs: Developing an ISCM Program Assessment, which defines a way to conduct ISCM program assessments using the assessment procedures in the companion ISCM Program Assessment Element Catalog and is designed to produce a repeatable assessment process. Continuously patch your operating systems, databases, networking devices, and any third-party software to maintain a complete common operational picture (COP) of the regulated environment.
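To make the MFA item concrete, here is a minimal sketch, in Python using only the standard library, of how a time-based one-time password (TOTP, the “something you have” factor) is derived from a shared secret and checked alongside a password. The secret and password below are hypothetical placeholders; a real deployment would rely on an identity provider or a vetted MFA library rather than hand-rolled code.

```python
# Minimal TOTP (RFC 6238) sketch: the second factor is derived from a shared
# secret and the current time, so an attacker needs more than a stolen password.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)      # moving time factor
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def login(password: str, otp: str, secret_b32: str) -> bool:
    """Both factors must pass: something you know plus something you have."""
    password_ok = password == "correct horse battery staple"   # placeholder check only
    return password_ok and hmac.compare_digest(otp, totp(secret_b32))

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"                                 # example base32 secret
    print(login("correct horse battery staple", totp(secret), secret))
```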
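The RBAC sketch below, using hypothetical role names and permission strings, shows the core idea behind least privilege: access is denied by default, permissions are attached to roles rather than to individuals, and a service account is granted only the narrow role it needs instead of standing global administrator rights.

```python
# Minimal RBAC sketch: permissions belong to roles, users hold roles, and every
# action is checked against those roles. Roles and permissions are examples only.
from dataclasses import dataclass, field

ROLE_PERMISSIONS: dict[str, set[str]] = {
    "db_reader":    {"rds:read"},
    "app_operator": {"rds:read", "app:restart"},
    "global_admin": {"rds:read", "rds:write", "app:restart", "iam:manage"},
}

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)

def is_allowed(user: User, permission: str) -> bool:
    """Deny by default; allow only if one of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user.roles)

if __name__ == "__main__":
    svc_account = User("orion-service", {"app_operator"})  # least privilege, not global_admin
    print(is_allowed(svc_account, "app:restart"))          # True
    print(is_allowed(svc_account, "iam:manage"))           # False without the admin role
```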
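For the compliance item, the sketch below, assuming boto3 is installed and AWS credentials are configured, lists RDS instances and flags any that are not yet mapped to an internal scanning owner. The “scan-owner” tag is a hypothetical internal convention, not an AWS feature; the point is that AWS manages the underlying host, but vulnerability scanning of the customer’s database layer remains the customer’s responsibility.

```python
# Minimal shared-responsibility inventory sketch: find RDS instances that no
# internal team has claimed for vulnerability scanning. "scan-owner" is a
# hypothetical tag convention used here for illustration.
import boto3

def unassigned_rds_instances(region: str = "us-east-1") -> list[str]:
    rds = boto3.client("rds", region_name=region)
    unassigned = []
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = {t["Key"]: t["Value"] for t in db.get("TagList", [])}
        if "scan-owner" not in tags:                 # nobody owns scanning this database yet
            unassigned.append(db["DBInstanceIdentifier"])
    return unassigned

if __name__ == "__main__":
    for name in unassigned_rds_instances():
        print(f"RDS instance without a scanning owner: {name}")
```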
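For the supply chain item, the sketch below treats a pulled artifact the way a strict portable-device policy treats a USB drive: it is not used until its SHA-256 digest matches one pinned ahead of time, for example in a lockfile or an internal allow-list. The file name and pinned digest are hypothetical placeholders.

```python
# Minimal artifact-verification sketch: refuse to use a downloaded package,
# image layer, or tarball whose content does not match a pinned digest.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """True only if the artifact's content matches the digest pinned in advance."""
    return sha256_of(path) == expected_sha256.lower()

if __name__ == "__main__":
    artifact = Path("left-pad-1.3.0.tgz")   # hypothetical downloaded package
    pinned = "00" * 32                      # placeholder pinned digest
    if artifact.exists():
        print("trusted" if verify_artifact(artifact, pinned) else "do not install")
```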
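Finally, the continuous monitoring item starts with prioritization. The sketch below, using hypothetical assets and weights, scores each asset from a business impact analysis so the most critical, most exposed systems are onboarded to the monitoring and scanning tools first.

```python
# Minimal ConMon prioritization sketch: score assets by criticality, regulated
# data, and exposure, then feed the highest scores to monitoring tools first.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    holds_regulated_data: bool
    criticality: int          # 1 (low) to 5 (mission critical), from the BIA
    internet_facing: bool

def monitoring_priority(asset: Asset) -> int:
    """Simple additive score; weights should be tuned to the organization's own BIA."""
    score = asset.criticality
    score += 3 if asset.holds_regulated_data else 0
    score += 2 if asset.internet_facing else 0
    return score

if __name__ == "__main__":
    inventory = [
        Asset("patient-records-db", True, 5, False),
        Asset("marketing-site", False, 2, True),
        Asset("build-server", False, 3, False),
    ]
    for asset in sorted(inventory, key=monitoring_priority, reverse=True):
        print(f"{monitoring_priority(asset):>2}  {asset.name}")
```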

Cybersecurity is accelerating toward where it was always going to end up: a distributed computing model, zero-trust environments, and remote work. A great deal of the workforce is still working remotely and has access to information anywhere, at any time, so it is critical that we take care of the “low-hanging fruit” and prioritize security in every aspect of the business.
