Sunday, April 23, 2017

Building a website without bugs

A few years ago, I wanted to build a site that hosted up-to-date information pulled from a datastore and provided to clients. The first solution that comes to mind is a simple PHP codebase that pulls data from a SQL database and dynamically displays it to the user, but as has been well documented for decades now, it's hard to write secure code, and new attacks come out all the time to exploit webapp vulnerabilities.

The extreme avenue here, swinging heavily toward the security side, is to serve static content. Generally, if there are no inputs, there are no injection attacks. But when you want to keep content up to date, even in real time, editing HTML and manually inserting new data is impractical.

So I decided to try something slightly in the middle, which didn't sacrifice security and was much more practical. One can write a Python script that sits on the server, pulls fresh data from the database at regular intervals, then generates static content on the fly and places it in the webroot. Writing code that writes code is often messy, so that won't be fun, but it's really a short-term sacrifice for long-term security with usability. Now obviously this isn't going to be the best option if you want users to actually interact with your site (other than clicking links or emailing you), but if you just want to share information, it works great.
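Here's a minimal sketch of that generator in Python, assuming a SQLite datastore with a hypothetical items table; the paths, table and column names are placeholders, and every field gets HTML-escaped on the way out, since this script is the only thing standing between the database and the page.

import html
import os
import sqlite3
import time

DB_PATH = "/var/data/site.db"              # placeholder path to the datastore
WEBROOT_PAGE = "/var/www/html/index.html"  # placeholder path in the webroot
INTERVAL = 60                              # seconds between regenerations

def render(rows):
    # Escape every field so database content can never become markup.
    items = "\n".join(
        "<li>%s: %s</li>" % (html.escape(title), html.escape(body))
        for title, body in rows
    )
    return "<html><body><h1>Latest</h1><ul>\n%s\n</ul></body></html>" % items

def regenerate():
    with sqlite3.connect(DB_PATH) as db:
        rows = db.execute(
            "SELECT title, body FROM items ORDER BY updated DESC"
        ).fetchall()
    tmp = WEBROOT_PAGE + ".tmp"
    with open(tmp, "w") as f:
        f.write(render(rows))
    # Atomic rename so visitors never see a half-written page.
    os.replace(tmp, WEBROOT_PAGE)

while True:
    regenerate()
    time.sleep(INTERVAL)

You could just as easily call regenerate() from cron instead of looping; either way, the webserver only ever serves flat files.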

How do you attack that website? There's no attack surface at the webapp layer. The database server sits on localhost with only the script authenticating to and talking with it. Just keep your webserver and certs up to date, keep the data going into the database clean, and fundamentally you've got a solid setup with no need to worry about vulnerabilities in the website. Welcome to the 0.0001%.

Tuesday, April 18, 2017

Exploits are not weapons

An exploit demonstrates a vulnerability, either by simply showing that a given impact is possible, or by adding nice features, logic and ingenuity to make exploitation more comfortable or reliable.

Notice that nothing about that description has a single thing to do with weapons, or anything being weaponized. That's why it's extremely annoying to logic-minded folks when others, some of whom are actual security experts and others who couldn't tell a security bug from an Oreo cookie, conflate the two. When entire industries and even countries are built on the principle that there's a difference between using something for good and using it for evil, it becomes a problem when a fundamental topic gets stigmatized out of a lack of logical reasoning, profit to be gained, or just pure anarchy.

Just as a fancy car can be used to run someone over or just to drive to work, or how a knife is essential to halve your sandwich but can also be used to stab or stick, practically everything in the world is dual-use, for good or for evil. Banning exploits because evil can be done with them is the same logic as banning lighters or rope.

So sure, an exploit could technically be included in a weapon. We could imagine this just as easily as someone writing with a pen and only seconds later deciding to jab it into skin. There's a big difference and we can see the intent in each. Popping a shell on a box isn't a weapon. Crashing a mail server isn't a weapon. Redirecting your calls via malformed SMS message isn't a weapon unless you use it as one. And most folks are not using it in any relation to one (unless you're the government as has been well documented via years worth of leaks).

Honor the expression of free speech in code: exploits are not weapons and are generally never a component of one. Stop saying the ugly and misrepresented word weaponized in the context of computer security, because you're probably wrong. Remove it from your vocabulary not only because it's illogical, but because it's offensive to those who have spent sweat, blood and tears staying up all night for years and years coding beautiful ways to travel unintended paths. Consider regulating obvious things which are almost always weapons, but not weird machines.

Friday, April 29, 2016

Security vs Developer: What is it that ya do?

People often take jobs to earn money.

Simplistic view, sure, but we're starting at the root here. Of course there are other factors that play a part when deciding to stay, leap or move.

Location location location!

"How much career growth in the organization would this positive provide me?"

Who says you'll like your co-workers, that the company isn't tanking, or that your spouse will be happy?

All things considered, most people aren't going to work for $1/hr even if every single other attribute is overwhelmingly good.

In Product Security, one of the many sectors of the Information Security industry, you're probably working for a software company that wants to minimize shipping poor-quality design and code to its customers.

Why? Because customers pay the bills.

In Product Security, you of course are going to be working with developers in one scenario or another. Their job is to ship software that the customers want.

Why? See above.

The responsibilities of a PS engineer are to make sure a subset of that bad quality in software, the security-relevant stuff, doesn't ship and create holes for attackers to exploit to gain access to customers' systems.

The customer's expectation is usually that "nobody outside of those we allow to access this resource can access it". That expectation must then be matched by the software company in order to maintain a healthy relationship.

Why? See above.

The responsibilities of a developer, or software engineer, are to design to spec and implement a working product. Quality varies from company to company and is often addressed only if the customer notices it. If they do notice, a rush to fix it occurs.

Why? See above.

But on average, PS personnel are paid significantly more than developers. Both either work close to or own the quality of the code. Both either have formal engineering backgrounds or are strongly suited otherwise. They likely even work for the same organization within the company and snap to similar performance levels.

Let's look at a couple other key similarities:

If the security or reliability aspects of the software's quality are down, customer loss is imminent. Both roles are responsible and affected.

If the same aspects are up, customer contracts are probably going to remain stable or even increase. Both roles are responsible and affected.

So why do PSs typically make more euros than SEs? Are they just better negotiators, master interviewees or perhaps metahumans?

"There's more of me than they are of you"

If there's a 100:1 ratio, after considering all the other company-specific factors, it makes sense.

Are you special if you work in PS and make more than the SEs down the way?

Not necessarily-- you've just found a niche in the industry like many others.

And as long as customers are paying the bills and you have skills (no pun intended), you'll continue to earn a nice living. But companies also like money-saving techniques such as automation. How long is it before we automate ourselves out of our own jobs? Questions for another time.

Choo-choo: all aboard The Singularity express.

Sunday, March 13, 2016

Programming the Weird Machine

Halvar Flake tends to put things quite beautifully, as he did near the end of a recent talk he gave in Singapore on Rowhammer. At the end, he explained to the audience, in the simplest terms, what exploitation actually is.

You can take a look here if you like. I've just tried to elaborate on his words.

Think of a state machine. It has states and transitions between those states. When writing software, developers tend to take the state machine in their head (or as described in the design documents) and implement it in code. They then logically account for the states and transitions they intend.

If they make a mistake during design or implementation, where they didn't or couldn't account for other state transitions the software might make, and it is possible to put the program into those states, the result could be considered a bug. A security bug (i.e. a vulnerability) would be putting the software into a weird state that impacts the security of the system, and exploitation is programming the weird machine, the weird machine being the series of new states that was found.
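To make that concrete, here's a tiny hypothetical Python example (mine, not Halvar's): a toy protocol handler whose intended states are INIT, AUTH and COMMAND. The developer accounts for the transitions they intend, but an implementation mistake accepts "run" from any state, so COMMAND is reachable without authenticating; that unintended path is the weird state, and feeding the inputs that drive the program there is programming the weird machine.

# Intended state machine: INIT --hello--> AUTH --password--> COMMAND
INTENDED = {
    ("INIT", "hello"): "AUTH",
    ("AUTH", "password"): "COMMAND",
}

def handle(messages):
    state = "INIT"
    for msg in messages:
        if (state, msg) in INTENDED:
            state = INTENDED[(state, msg)]
        elif msg == "run":
            # The mistake: "run" is honored no matter the current state,
            # creating transitions the design never described.
            state = "COMMAND"
    return state

print(handle(["hello", "password", "run"]))  # intended path -> COMMAND
print(handle(["run"]))                       # weird path: COMMAND without AUTH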

Transforming this into security-speak, a proof-of-concept is something that uses a bug to drive the program into a weird state, thereby proving the concept, usually the one describing the bug. An exploit fully programs at least one path of the weird machine to a useful destination weird state.

Here are some examples for just one bug:

- A bug which has a provided PoC that crashes the program, but is theorized to have only a denial of service impact; its weird state is driven somewhat to fruition.

- The same bug and a modified PoC, for which someone else theorizes that the impact can be code execution; its weird state is partially driven.

- An exploit is written for the bug that claims to execute an arbitrary payload; its weird state has been fully realized in one direction.

- Another exploit is written which not only executes arbitrary code, but does so after transitioning to a state which runs privileged code; its weird state may have been fully realized at this point, unless someone else comes along and finds an even more useful transition, or it's proved there isn't one, or it's somehow proved that the state was false.

I found these words very tasteful and an excellent mindset to have when reasoning about vulnerabilities and exploitation. The best part is that these words have the potential to be used as foundations that make new ideas much more clear and precise.

Tuesday, November 17, 2015

Why Security

How did you get into (computer) security?

I bet I've been asked this question a million times by curious acquaintances.

Or asked even more personally by non-techie family or friends: How do you learn to do stuff like that?

But those questions are easy to answer-- simply replay from memory the fond circumstances where you needed to be creative. The more interesting question to me is this:

What do I enjoy about security?

But we're not going to go deep into that one either, as I'd like to talk about my thoughts on a sub-field of security. To me, this area is the most fun and provides the platform for curiosity to overcome even a thrill seeker: bug hunting.

What is bug hunting? It's commonly described as actively looking at code for bugs. That code can be source code, assembly or something else entirely.

For example, let's say I want to find a way to gain access to a server. The server is serving web pages with the Apache web server. I know that Apache is open source, meaning anyone is free to download, read and modify their own copy of the software. If one can read and understand the programming language Apache is written in, one may start their bug hunt there.
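As a hypothetical illustration of what "starting the bug hunt there" can look like at its crudest, here's a short Python sketch (my own, not a real audit methodology) that walks a C source tree and flags call sites that are often worth a manual look; the function list is just a starting point, and finding a call is nowhere near finding a bug.

import os
import re

# Calls that frequently handle attacker-influenced lengths or buffers.
SUSPECT_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|memcpy|alloca|sscanf)\s*\(")

def flag_candidates(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".h")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if SUSPECT_CALLS.search(line):
                        print("%s:%d: %s" % (path, lineno, line.strip()))

flag_candidates("httpd-2.4.x")  # hypothetical path to an unpacked Apache source tree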

Source code in hand, next is identifying the attack surface-- the different places where the web server parses or can be influenced by untrusted input, and so on and so on. The point is that there is a process to be followed here, and more than one. It's entirely up to the person trying to find the bug how they choose to approach it. But enough about what it is, let's ask an important question:

Why

Now why would anyone want to do such a thing?

If you break it down to its lowest practical level, bug hunting is the art of finding mistakes or invalidating assumptions-- more often than not, those that someone else made in code that a computer runs. Obviously everyone makes mistakes, no matter how smart they are, and some of those mistakes allow for exploitation. Someone skilled enough at the keyboard can take that mistake, tie a ribbon on it and send it along like nothing happened. But something did happen, and the program has a new feature: do whatever I tell you to do, which is often not what it was originally programmed to do.

CPUs, for all intents and purposes, have no distinction between right and wrong; they simply run the instructions given to them. The trick here is that mistakes in the code, combined with how CPU architectures work, allow persons (or robots, whatever) who did not write the original code to modify it, the order in which it's executed, or the resources it depends on, and change the outcome that the author intended.

Now, one might think, “How could anyone enjoy finding people's flaws and taking advantage of them?”. Well, now you've veered off on the wrong track. Let's get you back focused.

Bug hunters aren't finding flaws and using them against those developers in real life. Sure, programming is such a creative immersion that some might say there's a strong connection between a developer and their code. Yes, developers are often insulted when security folks find bugs in their code. But that's a whole different road to go down.

Philosophy (because I refuse to name this section “Ethics”)

Back to the question. This work can make the people who do it seem like they have debilitating personalities. Generalize it a bit and they become thought of as overly critical, arrogant or just out to make people look bad. I'm sure there are some exactly like that, but most don't fit that profile at all.

You see, the primary source of joy, excitement and achievement from finding bugs isn't about proving someone wrong or showing the world how dumb they are. It's about much more pleasant ideas.

One, the intellectual brain exercise of critical thinking.

Two, the paintbrush at your fingertips. A bug is an opportunity to create. Knowing about a bug doesn't mean you've exploited it-- once you understand it, you can write a program to exploit the program which contains the bug. And you can write the exploit any way you see fit. Really, you can write anything you want as long as it achieves the purpose. Now that's freedom.

Three, you can help fix the bug. Now, first and foremost, the bug at this point belongs to you. Sure, you can be almost certain someone somewhere else on Earth knows about it too, just because the math works out in 99% of the cases. But you are the one who decides what to do with the bug and/or exploit.

You could make the developers (or their company) aware of the bug.

You could make the community aware of the bug.

You could sell it; there are plenty of legitimate parties, perhaps with net-positive motives, willing to purchase the rights to your research.

You could simply have no further ambition but sharing it with friends or keeping the details to yourself.

You could even combine two or more of the above in almost any order.

Some people believe you should not give away “work” to greedy corporations. Some think if you don't make the vendor aware of the bug, you're being irresponsible, as the wider user base is then not protected against others who may find it and exploit it maliciously. Some are still part of a scene where bugs have no monetary value. It's pointless to argue which way is best: there's no right or wrong answer, it's completely subjective, and it's logical for several different values to be held.

Career-wise

There's so much ambiguity and context necessary when you talk about product security. “Code review” is a common way for developers to get feedback from colleagues on, e.g., a new feature they've written. But when talking about security, a code review means reading the code to find potential security bugs.

Those who have this very mindset, which by this point you can understand is not learned, but developed (typically solo), often have an interesting challenge fitting into the team and organization they become a part of. They typically aren't traditional developers and aren't interested in learning more than they need to in order to find classes of bugs they're interested in or developing tools to find those bugs for them. They also are not traditional auditors, whose very existence is predicated on the idea that people will always make mistakes and there's no practical, non-human way to find and file them. Security folks are around, and often no longer around, due to this fact as well. They may also not come from a formal security background in academia, one exit off the Saltzer and Schroeder expressway, so the research role (as it's understood by the company) may not fit exactly either.

One may find themselves in the position to seek a well-paid job with a respectable company where they need to prove they can not only meet the bar with their development skills, but do that security thing the company hired them for exceptionally well (much better than the developers already employed there) to justify the hire.

Thoughts

10-15 years ago, the world had yet to experience the era of criminals proliferating across the Internet. The hacking scene was different: groups of friends who enjoyed exchanging information and (mostly) harmlessly conquering the Internet. Now it's common for those veterans and newbies alike who have to deal with security, and all the interesting personalities that come with it, to assume that the criminals who steal, scam and cause real damage to people and businesses are “hackers”. Sure, some of them might hack. But there are still the ones who knew what the word stood for before. And there are those before us who knew it as something even different.

Are bug hunters hackers? Are criminals hackers? Are employees at those HackFests hackers (NO, and they're insulting to real hackers who come across them)? Can anyone be a hacker once the marketing industry takes the word and starts to ruin another cool one? Is the Wassenaar Arrangement the beginning of the end of complete freedom in bug hunting as we know it?

I'm not sure, and nobody can be sure of anything, really. But that's what I believe, and now, you may think differently, too. Conventional wisdom is just that-- by convention, and it has its place. That doesn't mean it's the truth, though. And asking someone how they got into security can take them down the rabbit hole of creativity, self-dependence and fond memories that have yet to (and hopefully never will) completely fade away.

Monday, January 26, 2015

Linux Kernel Debugging with VMware and GDB

Intro

There are many different ways to set up Linux kernel debugging across platforms and physical or virtual machines; internal options such as KDB or KGDB, which are interfaces to the kernel's debug core, are the most notable couple. I'm going to show you how to use VMware's impressive virtualization features to set up debugging for a Linux guest VM on a Mac OS X host machine using VMware and GDB. It's fairly straightforward, so if you're just starting out in kernel land, this is a chance for you to embrace a modern approach to Linux kernel debugging.

Environment

Host: Mac OS X Yosemite x64, VMware Fusion

Guest: Ubuntu Linux 14.04 x86

Note: Most of this should also be applicable to Windows host systems running VMware Workstation.

Step 1: Configure your host

Download the iso file for your target guest machine

Create a new virtual machine and install the guest OS

Shut down the virtual machine and edit its VMX file (if it's not visible as a separate file, you can right-click the VM bundle and choose "Show Package Contents").

Add the following line at the end of the file:

debugStub.listen.guest32 = 1

Save and close the VMX file and boot your guest machine.

Step 2: Configure your guest

Make sure the system is completely up to date, e.g. sudo apt-get update && sudo apt-get upgrade, and reboot once complete.

Then open up a terminal and do the following:

codename=$(lsb_release -c | awk  '{print $2}')

sudo tee /etc/apt/sources.list.d/ddebs.list << EOF
deb http://ddebs.ubuntu.com/ ${codename} main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-security main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-updates main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-proposed main restricted universe multiverse
EOF

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ECDCAD72428D7C01
sudo apt-get update
sudo apt-get install linux-image-`uname -r`-dbgsym

This adds the ddebs sources, refreshes apt and pulls down the debug symbol (dbgsym) package for your currently running kernel. We'll use these symbols later to make debugging much more effective.

Now you should see a new file was dropped:

/usr/lib/debug/boot/vmlinux-<kernel version>-generic

Copy this file to your host machine as we'll use it for symbols in GDB soon.

Step 3: Build and Connect GDB

The VMware debugging service for the guest VM is listening via host loopback on port 8832. We can use GDB to connect to the service and debug our virtual machine.

But first, we have to do a custom GDB build which supports ELF binaries. In Terminal:

(If you don't have wget, install MacPorts)
wget http://ftp.gnu.org/gnu/gdb/gdb-7.8.tar.gz
tar xf gdb-7.8.tar.gz
cd gdb-7.8
./configure --build=x86_64-apple-darwin14.0.0 --target=x86_64-vfs-linux --with-python && make
make install
 
Once the build completes, launch the newly built gdb, connect to VMware's debug stub and load the symbol file you copied from the guest:

(gdb) target remote :8832
Remote debugging using :8832
(gdb) symbol-file vmlinux-<kernel version>-generic
Reading symbols... done

Now you should be able to see details in the stack trace (bt) and if you encounter a kernel panic, you'll drop into the debugger!

Start Debugging

If you hit ctrl+c during boot, for example, then you can verify it's working.

^C
Program received signal SIGINT, Interrupt.
0xc1147b1c in copy_page (from=<optimized out>, to=0xfffb9000)
    at /build/buildd/linux-3.13.0/arch/x86/include/asm/page_32.h:47
47    /build/buildd/linux-3.13.0/arch/x86/include/asm/page_32.h: No such file or directory.
(gdb) i r
eax            0xfffb9000    -290816
ecx            0x400    1024
edx            0x1000    4096
ebx            0x1e8    488
esp            0xedb4be48    0xedb4be48
ebp            0xedb4be68    0xedb4be68
esi            0xecbe8000    -323059712
edi            0xfffb9000    -290816
eip            0xc1147b1c    0xc1147b1c <copy_user_huge_page+76>
eflags         0x210246    [ PF ZF IF RF ID ]
cs             0x60    96
ss             0x68    104
ds             0x7b    123
es             0x7b    123
fs             0xd8    216
gs             0xe0    224
(gdb) c
Continuing.

You can also intentionally trigger a kernel panic via the Magic SysRq sub-system. Use a root shell on the guest (sudo bash, not sudo prefixed to the command, since the shell performing the redirection is what needs the privileges):

echo c > /proc/sysrq-trigger

Misc

Looks like there's a guide to even use IDA's debugger for remote kernel debugging.

References

http://askubuntu.com/questions/197016/how-to-install-a-package-that-contains-ubuntu-kernel-debug-symbols
https://lists.ubuntu.com/archives/kernel-team/2014-July/045843.html
http://stackframe.blogspot.com/2007/04/debugging-linux-kernels-with.html
http://ntraft.com/installing-gdb-on-os-x-mavericks/
http://webcache.googleusercontent.com/search?q=cache:HSCTIWJMsXQJ:xulei.me/blog/2012/03/13/set-up-vmware-fusion-for-linux-kernel/+&cd=1&hl=en&ct=clnk&gl=us