In today’s cybersecurity environment—with new types of incidents and threat vectors constantly emerging—organizations can’t afford to sit back and wait to be attacked. They need to be proactive and on the offensive when it comes to defending their networks, systems, and data.

It’s important to understand that launching an offensive cybersecurity strategy does not mean abandoning traditional defensive measures such as deploying firewalls, intrusion detection systems (IDS), anti-malware software, patch management, security information and event management (SIEM), and other such tools.

Going on the offensive with cybersecurity involves taking extra steps to preemptively identify weaknesses before bad actors can take advantage of them. It means thinking like they do and anticipating their moves. While the idea of taking a proactive approach to security is not new, it has taken on greater significance given the level of risk so many organizations face today.

Threat hunting strategy

One of the most effective ways to be proactive with security is to deploy a threat-hunting strategy. Cyber threat hunting is a proactive defense initiative in which security teams search through their networks to find and isolate advanced threats that evade existing security tools.

Whereas traditional solutions such as firewalls and IDS generally involve investigating evidence-based data after an organization has received a warning of a possible threat, threat hunting means actively seeking out threats before they can do damage.

Gain visibility

Several key components make up the foundation of a strong threat-hunting program. The first is the ability to maintain a complete, real-time picture of the organization’s environment so that threats have no place in which to hide. If the security team is not able to see the threats within their organization’s environment, how can it take the necessary steps to stop them?

Having the kind of visibility that’s needed can be a challenge for many organizations. The typical IT infrastructure today is made up of diverse, dynamic, and distributed endpoints that create a complex environment in which threat vectors can easily stay out of sight for weeks or even months.

That’s why an organization needs technology that allows it to locate each endpoint in its environment and know whether it’s local, remote, or in the cloud; identify active users, network connections, and other data for each endpoint; visualize the lateral movement paths attackers can traverse to reach valuable targets; and verify whether policies are set on each endpoint so the team can identify any gaps.
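The last of those capabilities, spotting policy gaps across a fleet, can be sketched as a minimal inventory check. The `Endpoint` fields and required-policy names below are hypothetical, not any vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """Minimal endpoint record; fields are illustrative only."""
    name: str
    location: str                                 # "local", "remote", or "cloud"
    active_users: list = field(default_factory=list)
    policies: dict = field(default_factory=dict)  # policy name -> applied?

# Hypothetical baseline every endpoint is expected to satisfy.
REQUIRED_POLICIES = {"disk_encryption", "edr_agent", "patch_baseline"}

def policy_gaps(endpoints):
    """Map each endpoint name to the required policies it is missing."""
    gaps = {}
    for ep in endpoints:
        missing = {p for p in REQUIRED_POLICIES if not ep.policies.get(p)}
        if missing:
            gaps[ep.name] = sorted(missing)
    return gaps

fleet = [
    Endpoint("laptop-01", "remote", ["alice"],
             {"disk_encryption": True, "edr_agent": True}),
    Endpoint("vm-cloud-7", "cloud", [],
             {"disk_encryption": True, "edr_agent": True, "patch_baseline": True}),
]
print(policy_gaps(fleet))  # laptop-01 is missing patch_baseline
```

In a real deployment the fleet data would come from an endpoint management platform rather than hand-built records, but the shape of the check is the same: enumerate, compare against the baseline, surface the gaps.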

Proactively hunt for threats

The second key component of threat hunting is the ability to proactively hunt for known or unknown threats across the environment within a matter of seconds. Security teams need to know if there are active threats already in the environment.

They need to be able to search for new, unknown threats that signature-based endpoint tools miss; hunt for threats directly on endpoints, rather than through partial logs; investigate individual endpoints as well as the entire environment within minutes without creating a strain on network performance; and determine the root causes of any incidents experienced on any endpoint devices within the environment.
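As a rough illustration of hunting directly on endpoints rather than through partial logs, the sketch below scans hypothetical per-host process lists for known-bad file hashes. The host names and process data are invented; the one hash shown is the well-known MD5 of the EICAR test file:

```python
# Indicator of compromise: the EICAR antivirus test file's MD5.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def hunt(hosts):
    """Return (host, process) pairs whose binary hash matches a known IOC."""
    hits = []
    for host, processes in hosts.items():
        for proc in processes:
            if proc["md5"] in KNOWN_BAD_HASHES:
                hits.append((host, proc["name"]))
    return hits

hosts = {
    "srv-01": [{"name": "svchost.exe", "md5": "aaaa"}],
    "wks-17": [{"name": "update.exe",
                "md5": "44d88612fea8a8f36de82e1278abb02f"}],
}
print(hunt(hosts))  # the match on wks-17 is surfaced
```

A production hunt would query live endpoint telemetry at scale and look for unknown threats via behavior, not just hash matches, but the pattern of sweeping every endpoint for indicators is the core of the capability described above.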

Remediating threats

The third foundational component of threat hunting is the ability to respond to and resolve any threats that the security team finds within the same unified platform. Finding a threat is not enough—it has to be obliterated.

A threat-hunting solution should enable security teams to shift easily from threat hunting to response using a single dataset and platform; to apply defensive controls to endpoints quickly during an incident; to learn from incidents and use that knowledge to harden the environment against similar attacks; and to streamline policy management to keep endpoints in a secure state at all times.

What to look for in a threat-hunting solution 

A key factor to look for in a threat-hunting solution is the ability to use statistical analyses to better understand whether particular incidents are notable. That can only happen when a system can enrich data telemetry in real time, at scale, and in constantly changing situations.
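One simple form of such statistical analysis is flagging an event count that sits far outside its historical baseline. The sketch below applies a z-score test to hypothetical hourly failed-login counts; real enrichment pipelines combine many richer signals, so this is illustrative only:

```python
import statistics

def is_notable(history, current, threshold=3.0):
    """Flag the current count if it sits more than `threshold` standard
    deviations above the historical mean (a basic z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return (current - mean) / stdev > threshold

logins = [4, 5, 6, 5, 4, 6, 5]   # hourly failed-login counts, invented
print(is_notable(logins, 40))    # True: a sudden spike stands out
print(is_notable(logins, 6))     # False: within normal variation
```

The point of enriching telemetry in real time is to keep baselines like `history` current as the environment changes, so that "notable" is always judged against what normal looks like now.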

Security teams can leverage every log source, piece of telemetry, and bit of endpoint metadata and traffic flow in an aggregated manner to get a clear understanding of what’s going on. Threat actors rarely get into an organization’s environment without leaving traces; the question is whether the threat-hunting team is leveraging the right data to track them down.

It’s important for security hunting teams to have high-confidence threat intelligence and to follow the right feeds. While enriching alerts with real-time intelligence is not always easy, it’s vital for success. Teams need to work with trusted sources of data and must be able to filter the data to reduce false positives as well as false negatives.
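Filtering a feed against internal knowledge is one concrete way to cut false positives. The sketch below keeps only indicators above a confidence floor and not on an internal allowlist; the field names are hypothetical and the IPs come from the IANA documentation ranges:

```python
def filter_indicators(feed, allowlist, min_confidence=70):
    """Keep indicators that clear a confidence floor and are not on the
    internal allowlist; the record format is illustrative, not a real
    feed's schema."""
    return [
        ioc for ioc in feed
        if ioc["confidence"] >= min_confidence and ioc["value"] not in allowlist
    ]

feed = [
    {"value": "203.0.113.9", "confidence": 90},   # kept
    {"value": "198.51.100.4", "confidence": 40},  # low confidence: dropped
    {"value": "192.0.2.1", "confidence": 95},     # allowlisted: dropped
]
trusted = {"192.0.2.1"}  # e.g. a known-good internal service
print([i["value"] for i in filter_indicators(feed, trusted)])
```

Tuning `min_confidence` is the trade-off the text describes: set it too high and real threats slip through (false negatives); too low and analysts drown in noise (false positives).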

In addition to threat hunting, organizations can leverage services such as penetration testing and threat intelligence. With penetration testing, an organization hires a service provider to launch a simulated attack against its networks and systems to evaluate security.

Such tests identify weaknesses that might enable unauthorized actors to gain access to the organization’s data. Based on the results, the security team can make any needed enhancements to address the vulnerabilities.

Cyber threat intelligence is any information about threats and threat actors that is intended to help companies mitigate potential attacks in cyberspace. Sources of the information might include open-source intelligence, social media, device log files, and others.

Over the past few years, threat intelligence has become an important component of cybersecurity strategies, because it helps organizations be more proactive in their approach and determine which threats represent the greatest risks.

By being proactive about security, organizations can be out in front of the ever-expanding threat landscape. They can help to ensure that they’re not just waiting passively for attacks to come, but taking the initiative to stop bad actors before they can act.

Learn how a converged endpoint management platform can help CIOs keep pace with tomorrow’s threats. Check out this eBook, The cybersecurity fail-safe: Converged Endpoint Management.


By Milan Shetti, CEO Rocket Software

In today’s digitalized world, customers value transparency and accessibility above all else. As a result, organizations are taking a proactive approach to provide critical content to end users at the click of a button.

For over 130 years, Hastings Mutual Insurance Company has served and protected its clients throughout the Midwest. The regional insurance agency, with nearly 600 offices and 500 employees, has provided security and peace of mind to customers of all shapes and sizes, from small personal family policies to larger insurance packages that have helped to protect farmers and businesses from the unexpected. With over $1 billion in total assets, the company has grown significantly since its humble beginnings in 1885. Still, Hastings continues to pride itself on its relationships and the care it provides its customers. That is why Hastings Mutual decided to look closely at how it managed and distributed its content to its clients.

Since the early 1980s, the company has used an in-house Policy Administration System (PAS) with what is today Rocket Software’s Mobius Content Services Platform to classify, manage, and grant access on its mainframe to more than 4,000 unique document types. Although current operations were running optimally, Hastings understood that its PAS’s lack of integration with modern technologies would eventually create issues. Hastings management decided on a proactive approach, taking on the challenge of modernizing its existing mainframe operations to an open-source environment to remain competitive in future markets. In its push to modernize, the regional insurance provider also believed updating its client viewing system to provide a more intuitive, user-friendly experience would benefit its customers and employees alike.

The challenges of preserving historical data

While migrating information from the mainframe to open source comes with its own obstacles, Hastings Mutual faced even greater challenges. The company had been developing and storing mission-critical documents and information on its old infrastructure for over three decades — including regulatory, accounting, and workflow documents. Not only would Hastings need to find a way to continue generating these documents throughout the migration process, but it was also essential to maintain the integrity of its historical documents and information during its transfer onto open-source systems. Failure to do so could lead to regulatory sanctions and even legal implications.

With limited resources and a lack of experience with mainframe migration, Hastings realized it needed help to clean up its Logical Partition (LPAR), preserve the integrity of its historical documents, and successfully downsize its mainframe operations — all while maintaining fluid operations.

Finding the right support for mainframe migration

Hastings turned to Rocket Software, whose Professional Services team got to work immediately to assist Hastings’ operational team in the clean-up of its existing LPAR environment. Together, the teams went through each historical document within the LPAR to rename and properly segment it for migration to the correct open-source system. 

Once documents were properly classified and stored within the LPAR ecosystem, Hastings turned its attention to mainframe migration. Hastings was able to modernize its mainframe operations while still utilizing its PAS in conjunction with Mobius Content Services to generate critical documents on its mainframe. After generation, the documents were automatically duplicated and safely transferred to the proper open-source environment. Hastings was then able to begin the migration of its historical documents safely and securely from the mainframe to its open-source systems.
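The integrity requirement behind a transfer like this can be illustrated with a checksum-verified copy: hash the source, copy it, and confirm the destination hash matches before the original is retired. This is a generic sketch, not Rocket Software's actual tooling, and the file names are invented:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def migrate_with_verification(src: Path, dest_dir: Path) -> bool:
    """Copy a document into dest_dir and confirm the destination's SHA-256
    digest matches the source's, so a corrupted transfer is caught."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy contents and metadata
    digest = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    return digest(src) == digest(dest)

# Hypothetical document migrated into a hypothetical open-source store.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "policy_1885_0001.txt"
    src.write_text("historical policy record")
    ok = migrate_with_verification(src, Path(tmp) / "open_source_store")
    print(ok)  # True when the copy is byte-identical
```

At the scale of three decades of regulatory and accounting documents, this verify-before-retire step is what turns a bulk copy into a defensible migration.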

Improving customer experience

Hastings’ pivot to a more innovative web client has also been essential to the migration’s success and the company’s growing customer satisfaction. Now, end users can access Hastings’ digitized documents with the click of a button — reducing document latency and making high-priority documents available within seconds rather than minutes. And having an intuitive open-source viewing system has empowered Hastings’ end users to find critical information faster and without the hassle of asking for assistance.

The benefits of great partnership

As a result of the project, Hastings Mutual continues to successfully move toward a hybrid open-source infrastructure. The company was able to modernize its operations to produce, store, and distribute documents to its clients faster, more securely, and at a lower cost.

Throughout the migration process, Hastings has not missed a beat. As a regional insurance provider, the ability to continue to provide outstanding service to clients when they need it the most has been pivotal.

As mainframe experts, Rocket Software helps businesses avoid complications and enhance the management and security of their most critical information. To learn more about our suite of Mobius products, click here.


Cyber attackers worldwide are displaying an increasing level of sophistication. This is a major issue for Australian CISOs and their teams who often lack the resources required to deal with more frequent and complex attacks by well-resourced cyber criminals.

At the same time, legacy security operations centres (SOCs) are dealing with an unmanageable volume of alerts. This leads to ‘alert fatigue’ that slows key processes down and makes it easier to miss potentially significant issues that could be buried in the noise. Hiring an army of security engineers to deal with these challenges is also expensive and doesn’t scale.

SOCs are also using too many security products (the average company may have dozens of cyber security products deployed), and many rely on manual processes for daily operations as well as dealing with incidents. Far too many menial tasks require significant human interaction and toil that can be mind-numbing.

Senior technology executives gathered recently for a discussion about the ways they can move from a reactive to a more proactive cyber security environment. The conversation was supported by Palo Alto Networks.

Attendees were initially asked how they ensure a consistent security posture that prevents sensitive data loss and blocks malware across all traffic flows, regardless of where the user is working or the apps they access.

Leonard Kleinman, Cortex Chief Technology Officer at Palo Alto Networks, advises attendees that the starting point to achieve a reasonable security posture is to have visibility into all aspects of the operational environment.

“After all, you cannot protect what you cannot see or do not know about. But the approach would be to strive for visibility or telemetry from all sources. These include the network, endpoints, and cloud, irrespective of location, identity or device.

“Such a unified platform provides immense flexibility to achieve various objectives related to, for instance, regulatory compliance and governance, incident response, and data loss prevention. The more sources, the richer the telemetry, the better the context. This permits faster and more informed decisions for detection and response,” he says.

Ian Palmer, head of ITDS at UTS College, says the education provider’s cyber security posture is based upon the risk to data by access and use.

Presently, applications that hold personal data are secured by the organisation’s firewalls, with any user needing access required to use a UTS College laptop that connects via VPN to the firewalls.

“This provides us with protection no matter where the user is working, as access to our devices requires multi-factor authentication (MFA), depending on the risk factors presented. All traffic, including internet traffic, goes through the firewalls, but we don’t see any degradation of service, with excellent bandwidth,” he says.

Nabil Saleh, chief information officer at Woollahra Municipal Council, says that his organisation maintains a consistent security posture by not allowing staff to bring their own devices to work and provides them with managed devices that have VPN access. This prohibits split tunnelling to ensure that all traffic is contained and encrypted, he says.

“VPN access provides a centralised standard operating environment that is the same, regardless of location. The devices have XDR endpoint security to ensure compliance with our security policies.

“In relation to sensitive data leakage, as opposed to loss, it can happen regardless of the controls that are in place and is reliant on the user’s diligence in protecting the data from unauthorised access,” he says.

Ashwani Ram, general manager, cyber security infrastructure and operations at Chartered Accountants Australia and New Zealand, believes that malware, for instance, is easier to take care of these days due to the bundling of EDR and XDR tools with managed security operations centre (SOC) services.

“Of course, you need to overlay this with intelligence, EDR/XDR, and DNS security so that users have less chance of being diverted to suspicious sites in the first instance.

“Zero trust application access and web browsing platforms with DNS threat management and web security provide secure VPN services. This means users can get out of the house and work from their favourite cafeteria and be productive – which is how we need to re-brand and sell endpoint security,” he says.

Sensitive data loss is a more difficult and complex problem, Ram adds.

“Before we can prevent data loss, we need to first be able to monitor data at all stages from creation to destruction. Once we better understand this cycle and usage, we need to take a two-pronged approach – education and tooling.

“Just like we say that people are the best firewalls, this is also the case when it comes to preventing data loss,” he says.

Hybrid working introduces new risks

Some attendees said they had reviewed their risk models as workers transition away from the office to their homes and other remote locations that are outside their network perimeters.

UTS College’s Palmer says the organisation has undertaken internal risk reviews and external audits of its cyber security posture to ensure that risk can be managed where there’s no network perimeter.

“We are trying to move towards a zero-trust model and have implemented major capabilities to ensure we are protected by layers of security,” Palmer says.

Woollahra Municipal Council’s Saleh says the organisation has done a risk assessment on working remotely and has educated staff through cyber awareness training on the ‘dos and don’ts’ of remote working. Remote access at the council also complies with ACSC’s Essential Eight Maturity Model, he says.

Palo Alto’s Kleinman adds that risk management is a dynamic paradigm, and it is constantly evolving.

“The reality is that risk in business can never be truly eliminated, but identifying and minimising risk can be significantly beneficial,” he says.

The transition to ‘work from anywhere’ is a great example of the dynamic and reflective kind of risks enterprises face as their businesses grow, develop and respond to remain competitive, he says.

When it comes to reassessing risk, Kleinman suggests that organisations need to start by asking, ‘what are the objectives and what are the risks that will impact the organisation’s ability to meet these objectives?’

“Regularly reviewing the risk model and risk management plan is essential for identifying new risks, developing new treatment plans and then monitoring their effectiveness,” he says.

A voice at the boardroom table

There’s no doubt that in recent years, company boards have become more aware of the risks to their organisations from cyber-attacks, as well as their potential liabilities following a breach.

Kleinman agrees that the main change in recent times is clearly the level of accountability and responsibility of boards for cyber-related risk, much of it stemming from the increase in new regulations and legislation.

“There’s a preponderance of data that supports the position and most board members are acutely aware of this. However, many board members still see cyber as a black box with cyber literacy and experience sadly lacking,” he says.

Kleinman says that a recent study on the cyber security skills of company directors in the ASX 100 found that only one per cent of non-executive directors responsible for overall governance and strategic direction had any cyber experience.

“I believe the conversation needs to shift from one focused on ‘how do we become compliant’ to one about understanding the business’ objectives and the risks that will impact on an organisation’s ability to achieve those objectives.

“History has shown us that simply being compliant does not mean being secure. Assuming that a quality CISO has access to the board or sits on the board, they should be focused on having the right conversation around cyber risk to ensure it is integrated into the wider enterprise risk management program and other corporate governance activities,” he says.

He adds that boards need to ensure they are having frequent conversations with the CISO by continuously reviewing the state of cyber security across the business.

“For example, lessons learnt from security incidents are invaluable in addressing the gaps and updating response plans. However, I also believe that addressing the cyber knowledge/experience at the board level would be a better augmentation to board composition than just relying on the CISO.”

Chartered Accountants Australia and New Zealand’s Ram adds that, unfortunately, the CISO only has a voice at the boardroom table through the CIO.

This is changing slowly, he says.

“I think boards have realised that they need to understand cyber security, but they are struggling to comprehend it. However, in their defence, I think CISOs also need to get better at translating risks into business terms and pitching it in the language that the board is familiar with and understands.

“I also think that there is an opportunity for the enterprise risk management team to better interface with the cyber security team to help translate cyber risks into business risks at a strategic and operational level. I believe that once this interface gets better, we will be in a better position to help the board understand cyber risks,” he says.

UTS College’s Palmer says that over the past two years, the organisation’s board has become aware of the personal liability they now hold in the event of a breach.

“The major firms have been talking to the board to make them understand the impact of cyber, so it has provided more visibility,” he says.

Palmer reports to UTS’ Audit and Risk Committee (ARC) on cyber security on a quarterly basis and is questioned about any perceived risks or threats.

“Also, external audits are undertaken regularly by independent organisations to ensure we are covering risks, with the results provided directly to the ARC. Having the ex-CIO for UTS on our board and ARC has created more awareness and a deeper understanding [of cyber issues],” he says.

Woollahra Municipal Council’s Saleh says that having successfully enabled remote working from day one of the pandemic, the board recognises the value that IT offers to business in times of crisis and, to some extent, the associated risks.

“Through board and executive leadership team awareness training, all members are more cognisant of cyber security risks than before. Also, there was a cyber security incident a few months ago that affected a similar organisation and made it into the media. As a result, our council is very well aware of the reputational damage that a cyber incident can cause and, hence, pays close attention to security requirements when they are tabled at meetings,” he says.
