
How to Manage Privileges in Endpoints?

If you are running an organization, you should be concerned with managing endpoint privileges to ensure devices such as laptops, smartphones, and tablets do not pose a threat to the cybersecurity of your company.

In this sense, one can use a PAM solution to support privilege management and avoid the risks of not implementing the principle of least privilege.

In this article, we explain how this works and how GO Endpoint Manager can help you. To facilitate your understanding, we divided our text into topics. They are:

  • What is Endpoint Privilege Management?
  • How does a PAM Solution Support Privilege Management?
  • GO Endpoint Manager as a Solution for Managing Privileges in Endpoints
  • About senhasegura

Enjoy the read!

What is Endpoint Privilege Management?

Endpoint privilege management combines application controls and privilege management and enables a company’s employees to have enough access to perform their activities without having full entitlements to the IT system.

Through endpoint privilege management (EPM) technologies, professionals have access only to trusted applications and companies are able to remove local administrator access with little impact on end users.
In practice, we are referring to the implementation of the principle of least privilege, according to which employees receive only the necessary permissions to perform their tasks.
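To make the principle concrete, here is a minimal sketch of least privilege in code (the role and permission names are invented for illustration): each role carries only the small set of permissions its tasks require, and every action is checked against that set.

```python
# Hypothetical role-to-permission mapping: each role gets only what its
# day-to-day tasks require, nothing more.
ROLE_PERMISSIONS = {
    "accounting": {"read_invoices", "create_invoices"},
    "helpdesk": {"read_tickets", "reset_user_password"},
}

def is_allowed(role: str, action: str) -> bool:
    """Permit an action only if the role's minimal permission set includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("helpdesk", "reset_user_password"))  # True
print(is_allowed("helpdesk", "create_invoices"))      # False: not needed for the job
```

Anything outside the mapping, including unknown roles, is denied by default, which is the safe failure mode least privilege calls for.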

How does a PAM Solution Support Privilege Management?

Privileged Access Management (PAM) consists of a set of information security strategies and technologies that aim to protect accounts by controlling privileged access and permissions for users and reducing risks of external attacks as well as insider threats.

As PAM evolved, Gartner introduced two classifications that describe different PAM solution approaches: Privileged Account and Session Management (PASM) and Privilege Elevation and Delegation Management (PEDM), the latter being essentially endpoint privilege management.

The focus of PEDM is to provide more specific access controls than those provided by PASM, minimizing threats generated by excessive privileges. PASM is based on more basic methods to protect access, such as the use of passwords.

With PASM, machines and users gain access through administrator accounts that grant either full privileges or none at all.
With PEDM solutions, one can grant only the necessary access for the performance of certain tasks. Moreover, access can be limited to a specific time.

At the end of a session, privileges are revoked and if credentials are compromised, attackers will not be able to persist in their actions.
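The time-limited, auto-revoking grants described above can be sketched as follows (a toy model, not senhasegura's implementation): each elevation carries an expiry, and once it passes the grant is useless, even if the credential leaks.

```python
import time
from dataclasses import dataclass

@dataclass
class ElevationGrant:
    user: str
    permission: str
    expires_at: float  # epoch seconds

    def is_active(self, now=None):
        """A grant authorizes nothing once its time window has closed."""
        current = time.time() if now is None else now
        return current < self.expires_at

# Grant "restart_service" to alice for 30 minutes only.
grant = ElevationGrant("alice", "restart_service", time.time() + 1800)
print(grant.is_active())                          # True while the session runs
print(grant.is_active(now=grant.expires_at + 1))  # False once the window closes
```

Because expiry is checked on every use, a stolen credential stops working as soon as the window closes; no manual revocation is needed.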

PASM associated with PEDM makes it possible to control the privileges of administrator accounts, consequently reducing insider and external threats.

Another important function of PEDM tools is to let new roles be requested so users obtain the permissions needed to perform their tasks, meaning privileges are assigned through a flexible approach.
In addition, these tools help organizations meet compliance criteria, as they often provide reporting as well as monitoring capabilities.

GO Endpoint Manager as a Solution for Managing Privileges in Endpoints

GO Endpoint Manager is senhasegura's PEDM solution. This tool controls the delegation of privileges on Windows- and Linux-based endpoints, including Internet of Things and other wireless devices on corporate networks.

Through this capability, endpoints can be brought into compliance with security standards and frameworks such as NIST, CIS Controls, and ISO 27001.

About senhasegura

We at senhasegura are part of MT4 Tecnologia, a group of companies focused on information security, founded in 2001 and operating in more than 50 countries.

Our goal is to guarantee digital sovereignty and information security for our clients by granting control over privileged actions and data and preventing the theft and leakage of information.

To do this, we follow the lifecycle of privileged access management through machine automation, before, during, and after access. We also seek to:

  • Prevent companies from suffering interruptions in their operations;
  • Automatically audit the use of privileges;
  • Automatically audit privileged changes to detect privilege abuse;
  • Provide advanced PAM solutions;
  • Reduce cyber risks;
  • Bring organizations into compliance with audit criteria and standards such as HIPAA, PCI DSS, ISO 27001, and Sarbanes-Oxley.

Conclusion

By reading this article, you saw that:

  • Endpoint privilege management allows employees of a company to have enough access to perform their activities, without having full entitlements over the IT system;
  • PAM has two complementary approaches to protecting accounts, namely Privileged Account and Session Management (PASM) and Privilege Elevation and Delegation Management (PEDM);
  • GO Endpoint Manager is senhasegura’s PEDM solution. This tool is used to control the delegation of privileges to endpoints.

Was this article helpful to you? If so, share it with someone who might be interested in the topic.

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across areas including cybersecurity, cloud, data protection, endpoints, infrastructure, system monitoring, storage, networking, business productivity, and communication products.

Through an extensive network of channels, points of sale, resellers, and partner companies, Version 2 offers quality products and services that are highly acclaimed in the market. Its customers cover a wide spectrum, including Global 1000 enterprises, regional listed companies, various vertical industries, public utilities, government, a vast number of successful SMEs, and consumers in various Asian cities.

About Segura®
Segura® strives to ensure the sovereignty of companies over privileged actions and information. To this end, we work against data theft through traceability of administrator actions on networks, servers, databases, and a multitude of devices. In addition, we pursue compliance with auditing requirements and the most demanding standards, including PCI DSS, Sarbanes-Oxley, ISO 27001, and HIPAA.

ESET Named Champion in the Canalys Global Security Leadership Matrix 2022

December 20, 2022 – ESET, a global leader in digital security, announced that it has been ranked first in the Canalys Global Security Leadership Matrix 2022. Canalys is a leading global technology market analyst firm that comprehensively evaluates leading cybersecurity vendors through their established channel programs. ESET's continued revenue growth, including 30% growth in its MSP segment, combined with the quality of its account management and overall ease of doing business, has kept it highly rated among its partners.

The ESET partner program emphasizes building long-term relationships, which helps partners continuously provide valuable feedback. ESET's sales network now comprises more than 10,000 MSPs and 24,000 resellers, and MSPs remain a core part of the strategy. Execution of that strategy has been reinforced by delivering XDR solutions through MSPs, including ESET Inspect and ESET Inspect Cloud, which were previously available only to enterprise accounts.

For more than 30 years, ESET has invested in multiple layers of proprietary technology to keep customers' endpoints and systems safe from both known and unseen threats. Ignacio Sbampato, Chief Business Officer at ESET, said: "Our goal is to provide digital security that lets business systems run smoothly and safely. We believe we offer our partners the most sophisticated multilayered technology, allowing them to focus on what matters most: advancing their business."

As a privately held, technology-focused company, ESET has always taken a science-based, security-first approach, adopting machine learning and cloud computing capabilities early on to build its global threat intelligence systems. ESET's business solutions have repeatedly been rated as an industry champion, top player, and leader.

About Version 2

Version 2 Digital is a value-added distributor and IT developer based in Asia. The company distributes and develops IT products across areas including cybersecurity, cloud, data protection, endpoint devices, infrastructure, system monitoring, storage, network management, business productivity, and communication products. Through its extensive network of channels, points of sale, distributors, and partners, Version 2 delivers products and services that are widely acclaimed in the market. Version 2's sales network covers Asia-Pacific regions including Taiwan, Hong Kong, Macau, mainland China, Singapore, and Malaysia, and its customers come from all industries, including Global 1000 multinationals, listed companies, public utilities, healthcare, finance, educational institutions, government departments, countless successful SMEs, and consumer-market customers in cities across Asia.

About ESET
Founded in 1992, ESET is a global provider of computer security software for businesses and home users. Its award-winning product, the NOD32 antivirus system, provides real-time protection against known and unknown viruses, spyware, rootkits, and other malware. ESET NOD32 uses minimal system resources and detects threats quickly, delivering highly effective protection, and it has won more Virus Bulletin 100 awards than any other antivirus product. ESET has been named a Deloitte Technology Fast 500 company for five consecutive years and maintains an extensive partner network that includes internationally known companies such as Canon, Dell, and Microsoft. It has offices in Bratislava (Slovakia), Bristol (UK), Buenos Aires (Argentina), Prague (Czech Republic), and San Diego (US), with distributors covering more than 100 countries.

Backup Strategy and the 3-2-1 Principle

Data loss comes in all sizes: small (individual files), medium (a SharePoint site), and large (ransomware and disaster recovery). No matter the size of the loss, none of them are fun, and even the smallest data loss event can leave you without your most critical data. That one spreadsheet or that one hard disk drive may hold what you and your business rely on most; it's not always something someone can "just create again" on a whim, and data loss is indiscriminate in its impact. Every data loss event disrupts workflow, and all of them are risk and data protection concerns that ultimately make backup a business imperative.

Proactive data protection through backup and data management is at the forefront of all of our minds, or at least it should be. Why? Years ago, the assumption prevailed that a cloud service would "take care of everything" once you signed up, backup included. But now, more than ever, as awareness grows of the shared responsibility models for SaaS applications, which state that the user is responsible for their data, it's clear the onus is on you to have a backup strategy in place. That's why the 3-2-1 backup rule, a principle established for on-premises infrastructure that requires multiple copies of backup data on different devices and in separate locations, remains relevant to today's cloud-based infrastructures: it provides essential data protection guidelines.

Why Back Up Cloud SaaS Data, and Why Now?

Your data is critical to your business operations, and in many cases, maintaining control of and access to it is required by law. (Read more about how third-party security keeps companies in control of their data here.)

SaaS Shared Responsibility Model

Software-as-a-service providers publish documentation that clarifies which responsibilities they hold and which are retained by the customer. Microsoft, well known for its Microsoft 365 SaaS offering, delineates the boundaries of shared responsibility in the cloud. While Microsoft does provide some degree of data protection, many people are unaware of that protection's limitations. The short of it is that Microsoft does not provide suitable backup and restore functionality to customers. Learn more about why your M365 is not backed up (and how to fix it) in our in-depth article here.
And it's not only Microsoft that applies shared responsibility to its SaaS services. Google has what it refers to, almost ominously, as "shared fate" in its Google Cloud shared responsibility documentation. Likewise, Amazon Web Services (AWS) has its own shared responsibility model. It's vital that customers know and understand the extent of their agreements.

Risks to Data Security

In the days of on-premises backup, the only credible risks seemed to be acts of mother nature and hardware failure; that is, of course, if you ignore software issues. Plenty of software problems (from firmware on RAID adapters, to drivers, to operating system filesystem implementations, to user applications) could cause data loss and the need for a restore, from system level down to file level. (That's one thing I don't miss about the '90s.) In the cloud-computing era, however, the risks have evolved as much as the ways in which we create, share, and store data, so things are much more complicated now. With the prevalence and penetration of ransomware and cybercrime, not to mention the increased access users are given to streamline collaboration and boost productivity, data, the lifeblood of a company, has in many ways never been more susceptible to loss, whether intentional (malicious actors, ransomware, etc.) or unintentional (human error, accidental deletion). Sometimes going back to basics is the best place to start when developing or hardening security.

3-2-1 Backup Method

The 3-2-1 principle comes from the days of on-premises data storage, yet it is still commonly referenced in the modern cloud-computing era. Even though it isn't directly applicable, word for word, to cloud data, this well-known and widely used principle can still guide security decision-makers as they harden their infrastructure against today's data risks.
Roughly speaking, the 3-2-1 backup rule requires three copies of data, across two types of storage media, with one copy stored off-site.
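The rule can even be checked mechanically. Below is a toy checker (the inventory format, with an administrative "domain" and an "external" flag per copy, is invented for this sketch) that tests a set of data copies against the cloud-era reading of 3-2-1: at least three copies, spread over at least two administrative domains, with at least one copy outside the production provider.

```python
def satisfies_321(copies):
    """At least 3 copies, >= 2 administrative domains, >= 1 external copy."""
    enough_copies = len(copies) >= 3
    two_domains = len({c["domain"] for c in copies}) >= 2
    one_external = any(c["external"] for c in copies)
    return enough_copies and two_domains and one_external

# Production data, an in-service copy, and one independent backup: passes.
inventory = [
    {"domain": "microsoft365", "external": False},
    {"domain": "microsoft365", "external": False},
    {"domain": "independent-backup-cloud", "external": True},
]
print(satisfies_321(inventory))  # True

# Everything living in one provider's cloud: three copies, but no real gap.
provider_only = [{"domain": "microsoft365", "external": False}] * 3
print(satisfies_321(provider_only))  # False
```

The second inventory fails because all copies share one administrative domain, which is exactly the logical-gap problem discussed in the sections that follow.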

What Is the Origin of the 3-2-1 Rule?

Backup and recovery solutions existed long before cloud computing. The methodologies have shifted, however, with the modernization of infrastructures, behaviors, needs, and of course many more variables (which we won't get into here), resulting in some discrepancies between best-practice principles and their application to modern data infrastructures. This is also the case with the 3-2-1 backup rule, with the biggest change being the shift in how, or rather where, data is created and stored. Formerly, production data was created on site and stored on on-premises hardware alongside one backup copy, with a third copy stored off premises, typically on tape. ComputerWeekly has a feature on whether the cloud has made 3-2-1 obsolete. In the cloud era, data is created in numerous places by remote workers in SaaS applications, is often transferred around the globe, and is stored "somewhere else" relative to a business's physical office. More than likely, the extent of an answer to the question "where is your data stored?" is that it's in the cloud. But is that backup? And what is true backup in the cloud?

How Does the Rule Apply to Cloud Backup?

We often see iterations of this backup principle in fancy infographics that almost forget to translate the rules to current scenarios. However, with a few tweaks, there's plenty of relevant guidance here that can lead to a successful, modern data security system.
Let’s look at the rules with a modern lens:

3 Copies of Your Data

The ‘3’ in the rule refers to the number of “copies of your data,” with one being the primary dataset in the production environment while the remaining two copies are backups. This is still applicable to modern data protection best practices.

2 Administrative Domains

As mentioned, the ‘2’ can be understood as "two administrative domains," meaning the copies are managed independently of one another and stored in separate logical environments. You often see this written as "two types of media," a relic of the on-prem past when that meant disks and tapes. Now it's about having copies across multiple disks and across two administrative domains, so that a single data loss event cannot possibly, or is at least extremely unlikely to, impact all copies of the data. This is known as a logical gap. When working "in" the cloud, the building you sit in isn't of any real consequence to the data; it's the cloud you are working in and storing data in that matters. In many regards, this step could envelop the "1 copy external" step below, but in respect to the principle, it serves us to keep it a separate consideration. Should there be a cloud-wide compromise (such as a breach) or a data loss event in the cloud where your primary data lives, following the rule means your data is still available to you. Without it, you lose access to your data (or even lose it permanently), with massive potential for business disruption and cost. One of the best-known examples is the Danish shipping giant Maersk and the infamous NotPetya cyberattack, dubbed "the most devastating cyberattack in history" in the full Wired story here.

1 Copy External

Formerly the ‘1 off-site storage copy,’ this still applies for the same reasons it did in the past: you don't want to store all of your data in the same exact location, and, whether everyone is aware of it or not, the cloud lives in physical data centers. In the on-premises days, this meant literally keeping a copy of disks and/or tapes at a different location from your business, in case someone, something, or some event destroyed the building. Let's call this the "in case of fire" step. In cloud computing, it means having a backup copy outside the cloud of the production environment and outside the administrative domain of the other backup. Remember, the cloud is ‘just’ physical data centers, so the centers you store your data in really do matter to the data. What if the data center of the cloud you work in is the same data center where your backup data is stored? Should a data loss event hit that center, all of your data would be at risk. That's bad.

Use Case: What would this look like in real life?

If, for example, you are working on a Microsoft Word document and you save it to OneDrive with OneDrive Backup turned on, you're totally protected, because it says "backup," right? This is an example where the 3-2-1 principle still sheds light on modern data protection in the cloud. Applying the rule above, one can deduce that this isn't backup (but then, neither is a lot of what SaaS providers offer as ‘backup’), because true backup requires a logical infrastructure separate from the primary data. As the "in case of fire" step requires, you must have one copy outside of the administrative domain. By working in and backing up OneDrive data to Microsoft's cloud services, the data remains in the same administrative domain. What if something were to happen to Microsoft's servers? You'd lose access to both your primary data and the copies "backed up," since they all rely on the same cloud. What's even worse, since the backup is configured by "you" (i.e., the admin), a compromise of your account can unconfigure it, too. A simple case of ransomware could completely and automatically disable or work around such in-service protections, even leading to immediate deletion of backup data. Keepit, on the other hand, besides being separate (and therefore unlikely to be compromised at the same time by the same mechanism), is a dedicated backup solution that protects even the administrator from quickly or immediately deleting backup data. In this respect, Keepit offers some of the most desirable features of "the tape in an off-site vault" in a modern cloud service solution.

Here's how to use the 3-2-1 backup rule to ensure you're covered: keep at least one backup copy in an independent cloud, outside your production provider's administrative domain.

If you're interested in further reading, check out our e-Guide on SaaS data security for a thorough look into leading SaaS data security methodologies and how companies can raise the bar for their data protection in the cloud era. If you're convinced you need backup but want to know more about data protection and management for your particular SaaS application, explore how Keepit offers cloud data backup coverage for the main SaaS applications here.


About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service, and Keepit's mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on more than 20 years of experience building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

Cyber Kill Chain

Intro

This is an important concept, and I want to provide you with a quick overview of what kill chains are, what threat modelling is, why we do these things, and why we need them. This understanding is crucial to building a stable and strong security posture.

One other thing to note is that all these frameworks are generally made to complement one another. For example, the Unified Kill Chain (UKC) is designed to complement MITRE ATT&CK.

Cyber Kill Chain – what is it?

The term comes from a military concept describing the structure of an attack. Lockheed Martin (a security and aerospace company) established the Cyber Kill Chain in 2011, based on that military concept. The idea of the framework is to define the steps adversaries take when attacking your organization. In theory, to be successful, an adversary has to pass through all the phases of the Kill Chain.

Our goal here is to understand what the Kill Chain means from an attacker's perspective, so that we can put our defences in place and either pre-emptively mitigate their attacks or disrupt them.

Why do we need to understand the (Cyber) Kill Chain?

Understanding the Cyber Kill Chain can help you protect against a myriad of attacks, such as ransomware. It can also help you understand how APTs operate. With this understanding, as a SOC analyst or incident responder, you can work out an attacker's goals and objectives by comparing their activity to the Kill Chain. It can also be used to find gaps in your defences and remediate missing controls.

The attack phases within the Cyber Kill Chain are:

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command & Control
  • Actions on Objectives (Exfiltration)
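The phases above can be modelled as an ordered structure, paired here with one example defensive control each (the control mappings are my own illustration, not part of Lockheed Martin's framework). Because the chain is sequential, breaking any single link disrupts the whole attack:

```python
# The seven Cyber Kill Chain phases, in order, each paired with one example
# defensive control (illustrative mappings, not an official list).
KILL_CHAIN = [
    ("Reconnaissance", "limit public exposure of employee data"),
    ("Weaponization", "threat intelligence on current malware tooling"),
    ("Delivery", "email filtering and user awareness training"),
    ("Exploitation", "patching and endpoint protection"),
    ("Installation", "application allow-listing and EDR"),
    ("Command & Control", "egress filtering and DNS monitoring"),
    ("Actions on Objectives", "least privilege, backups, DLP"),
]

def phases_remaining(current_phase):
    """How many phases an attacker still has to complete; each remaining
    phase is another defensive opportunity to break the chain."""
    names = [name for name, _ in KILL_CHAIN]
    return len(names) - names.index(current_phase) - 1

print(phases_remaining("Delivery"))  # 4
```

Catching an intrusion at Delivery still leaves four phases where defenders can intervene before the adversary reaches their objectives.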

Reconnaissance

As we all know, this means searching for and collecting information about target systems. In this phase, our adversaries do their planning. This is also where OSINT comes in; it's usually the first step an adversary takes before going further down the chain. They will try to collect any possible piece of info on our organization: employees, emails, phones, you name it.

This can be done, for example, through email harvesting, the process of collecting email addresses from online services (public, paid, or free). These addresses can later be used for a phishing campaign, or anything else.
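Defenders can run the same collection technique against their own public pages to audit what an adversary would find. A minimal sketch (the domain and page text are made up, and a real harvester would crawl many sources rather than scan one string):

```python
import re

# A simple pattern for spotting email addresses in free text.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def exposed_addresses(page_text, our_domain="example.com"):
    """Return company addresses visible in already-fetched public page text."""
    return sorted({m for m in EMAIL_RE.findall(page_text)
                   if m.endswith("@" + our_domain)})

page = "Contact sales@example.com or hr@example.com; partner: x@other.org"
print(exposed_addresses(page))  # ['hr@example.com', 'sales@example.com']
```

Anything this turns up is a candidate for removal, aliasing, or at minimum extra phishing awareness for the mailbox owners.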

Within the recon step, they might also collect the social media profiles of the organization's employees, especially if a particular employee seems interesting or looks like an easier target. All this information goes into the mix; none of it is new. First step: recon.

Weaponization

After the initial recon phase, our adversary goes on to create their weapons! Usually, this entails some sort of malware/exploit combination bundled into a payload. Some adversaries will buy malware outright on dark web marketplaces, but more sophisticated attackers, as well as APTs, will usually write their own, which has the advantage of being more likely to evade your detection systems.

They can go about this in numerous ways. Some examples include creating a malicious MS Office document with bad macros or VBA scripts, or using Command & Control techniques so that the affected machine calls the command server for more malicious payloads. (Yikes!) Or they could add a backdoor, some other type of malware, or anything, really.

Delivery

This step entails the attacker choosing a way to deliver the payload/malware to their victim. There are many options here, but by far the most common is the good old phishing email.

With a phishing email sent after a successfully completed reconnaissance phase, the attacker can target a specific person (spearphishing) or a group of employees at your organization. The payload would be embedded within the email.

Other ways to distribute the payload include planting infected USB drives in public places (parking lots, streets, etc.). Or they could use a so-called watering hole attack, which targets a specific group of people by sending them to an attacker-controlled website, redirecting them there from a site they normally use that has been compromised by the attacker.

The attacker compromises the website and then tries to get unsuspecting users to browse to the malicious site, where the victims unintentionally download the malware/payload.

Exploitation

Before finally getting access to our systems, the attackers need to carry out an actual exploit. Assuming the previous steps worked and the user downloaded or somehow ran the malicious payload, the attacker is ready for the next steps, or whatever's in between! They can try to move laterally, reach your servers, escalate privileges; anything goes.

This step boils down to one of the following:

  • The victim opens the malicious file, triggering the exploitation
  • The adversary exploits our systems through a server, or some other way
  • They use a 0-day exploit

Whatever the vector, it comes down to them exploiting our systems and gaining access.

Installation

This step comes after exploitation, and it usually involves the adversary trying to keep a connection of sorts to our system. This can be achieved in many ways: they might install a backdoor on the compromised machine, modify our services, install a web shell on the web server, or do anything else that helps them achieve persistence. Persistence is key to the Installation phase.

The persistent backdoor is what will let the attacker interact and access our systems that were already compromised.

In this phase, they might also try to cover their tracks from your blue team by making the malware look like a legitimate app or program.

Command & Control

Command & Control, also known as C2 or C&C, is the penultimate phase, and this is where the adversary uses the previously installed malware to control the victim's device remotely. The capability is usually built into the malware itself, with some sort of logic through which it calls back home to its control server.

As the infected device calls back to the C2 server, the adversary now has full control over the compromised device. Remotely!

The most used C2 channels these days are:

  • HTTP on port 80 and HTTPS on port 443 – HTTPS is interesting as malicious traffic can hide within the encrypted stream and potentially evade firewalls
  • DNS – the infected host makes constant DNS queries to its C2 server

An interesting fact: in the past, adversaries used IRC to send C2 traffic (beaconing), but nowadays it has become obsolete, as this type of malicious traffic is much more easily detected by modern security solutions.
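On the defensive side, C2 beaconing over DNS or HTTPS often betrays itself through timing: malware calls home at near-fixed intervals, while human-driven traffic is bursty. A crude heuristic sketch (the thresholds are arbitrary, and real detections combine many more signals):

```python
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=2.0, min_queries=5):
    """Flag a per-host, per-domain query series whose inter-arrival times
    are suspiciously regular (low standard deviation of the gaps)."""
    if len(timestamps) < min_queries:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter

# Queries arriving every ~60 seconds look machine-generated...
regular = [0, 60, 121, 180, 241, 300]
# ...while human browsing is irregular and bursty.
human = [0, 12, 300, 302, 950, 1400]
print(looks_like_beacon(regular))  # True
print(looks_like_beacon(human))    # False
```

Sophisticated malware adds random jitter to its beacon interval precisely to defeat checks like this, which is why timing is only one input among many in modern detections.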

Actions on Objectives (Exfiltration)

This is your exfiltration (or exfil) step, where the adversary tries to gather all the goodies they came for: stealing user credentials; messing with backups and/or shadow copies; and corrupting, deleting, overwriting, or exfiltrating data. They can also escalate to domain admin for the keys to the kingdom, or move laterally through the organization. They could even probe your internal software for vulnerabilities, and much more.

This step depends on their specific goals/objectives, and is where all the action will happen, thus actions on objectives.

Conclusion

I hope I’ve managed to give a brief overview of this incredibly important concept. I will cover it a bit more in the future, but for now, I felt like this was a good (traditional) start. I do hope to cover the Unified Kill Chain soon, though. So – stay tuned!

PS: You'll notice there are a couple of other frameworks and variations besides the Cyber Kill Chain, and I will try to explain the distinctions. Just remember these are models/methodologies and there's no silver bullet; they should be used in conjunction with other security controls.


#kill-chain #cyber #C2 #threat-modelling


About VRX
VRX is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.
