Friday, April 29, 2016

Security vs Developer: What is it that ya do?

People often take jobs to earn money.

Simplistic view, sure, but we're starting at the root here. Of course there are other factors that play a part when deciding to stay, leap or move.

Location location location!

"How much career growth in the organization would this positive provide me?"

Who says you'll like your co-workers, that the company isn't tanking, or that your spouse will be happy?

All things considered, most people aren't going to work for $1/hr even if every single other attribute is overwhelmingly good.

In Product Security, one of the many sectors of the Information Security industry, you're probably working for a software company that wants to minimize shipping poor-quality design and code to its customers.

Why? Because customers pay the bills.

In Product Security, you of course are going to be working with developers in one scenario or another. Their job is to ship software that the customers want.

Why? See above.

Responsibilities of a PS engineer are to make sure a subset of that bad quality in software, the security-relevant stuff, doesn't ship and create holes for attackers to exploit and gain access to customers' systems.

The customer's expectation is usually "nobody outside of who we allow to access this resource can access it". The software company then has to meet that expectation in order to maintain a healthy relationship.

Why? See above.

Responsibilities of a developer, or software engineer, are designing to spec and implementing a working product. Quality varies from company to company and is often only a priority if the customer notices it. If they do notice, a rush to fix it occurs.

Why? See above.

But on average, PS personnel are paid significantly more than developers. Both either work close to or own the quality of the code. Both either have formal engineering backgrounds or are strongly suited otherwise. They likely even work for the same organization within the company and are held to similar performance standards.

Let's look at a couple other key similarities:

If the security or reliability aspects of the software's quality are down, customer loss is imminent. Both roles are responsible and affected.

If the same aspects are up, customer contracts are probably going to remain stable or even increase. Both roles are responsible and affected.

So why do PSs typically make more euros than SEs? Are they just better negotiators, master interviewees or perhaps metahumans?

"There's more of me than they are of you"

If there's a 100:1 ratio of developers to security engineers, then after considering all the other company-specific factors, the pay gap makes sense.

Are you special if you work in PS and make more than the SEs down the way?

Not necessarily; you've just found a niche in the industry like many others.

And as long as customers are paying the bills and you have skills (no pun intended), you'll continue to earn a nice living. But companies also like money-saving techniques such as automation. How long is it before we automate ourselves out of our own jobs? Questions for another time.

Choo-choo: all aboard The Singularity express.

Sunday, March 13, 2016

Programming the Weird Machine

Halvar Flake tends to put things quite beautifully, as he did near the end of a recent talk he gave in Singapore on Rowhammer. At the end, he explained to the audience, in the simplest terms, what exploitation actually is.

You can take a look here if you like. I've just tried to elaborate on his words.

Think of a state machine. It has states and transitions between those states. When writing software, developers tend to take the state machine in their head (or as described in the design documents) and implement it in code. They then logically account only for the states and transitions they intend.
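To make that concrete, here's a minimal sketch of my own (not from the talk) of a developer's intended state machine in C. The login-flow states, events and step() function are all hypothetical; the point is just that every transition is one the developer deliberately wrote down.

    #include <stdio.h>

    /* Intended states and events for a trivial login flow. */
    typedef enum { LOCKED, AUTHENTICATED, ADMIN } state_t;
    typedef enum { EV_LOGIN_OK, EV_SUDO_OK, EV_LOGOUT } event_t;

    /* Only the transitions the developer designed for; any other
     * event leaves the state unchanged. */
    static state_t step(state_t s, event_t ev) {
        switch (s) {
        case LOCKED:
            if (ev == EV_LOGIN_OK) return AUTHENTICATED;
            break;
        case AUTHENTICATED:
            if (ev == EV_SUDO_OK) return ADMIN;
            if (ev == EV_LOGOUT)  return LOCKED;
            break;
        case ADMIN:
            if (ev == EV_LOGOUT)  return LOCKED;
            break;
        }
        return s;
    }

    int main(void) {
        state_t s = LOCKED;
        s = step(s, EV_LOGIN_OK);  /* LOCKED -> AUTHENTICATED */
        s = step(s, EV_SUDO_OK);   /* AUTHENTICATED -> ADMIN */
        printf("final state: %d\n", s);
        return 0;
    }

Every state this program can reach is one the developer wrote a transition for on purpose.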

If they make a mistake during design or implementation, such that they didn't or couldn't account for other state transitions the software might make, and it is possible to put the program into those states, the result could be considered a bug. A security bug (i.e., a vulnerability) is one that puts the software into a weird state that impacts the security of the system, and exploitation is programming the weird machine: the series of new states and transitions that was found.
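Continuing the hypothetical sketch above, here's how one implementation mistake, an unchecked copy into a fixed-size buffer, can put that same program into a state the intended machine simply doesn't have. The struct layout and byte values are illustrative assumptions; the exact behavior depends on the compiler and platform.

    #include <stdio.h>
    #include <string.h>

    typedef enum { LOCKED, AUTHENTICATED, ADMIN } state_t;

    struct session {
        char name[8];   /* the bug: writes here aren't bounds-checked */
        state_t state;  /* happens to sit right after the buffer */
    };

    int main(void) {
        struct session s = { "", LOCKED };

        /* Attacker-supplied "name" longer than 8 bytes: the copy walks
         * past the buffer and overwrites `state` with attacker-chosen
         * bytes. This is deliberately an out-of-bounds write, so it's
         * undefined behavior in general; on a typical little-endian
         * build the trailing bytes land on `state`. */
        const unsigned char input[12] =
            { 'A','A','A','A','A','A','A','A', 3, 0, 0, 0 };
        memcpy(s.name, input, sizeof input);

        /* The program is now in state 3, a state the design never
         * defined and step() has no transitions for: a weird state. */
        printf("state = %d (intended states are 0, 1, 2)\n", s.state);
        return 0;
    }

The developer's machine only knows states 0 through 2; the attacker just added a new one, and choosing which bytes land there is the beginning of programming the weird machine.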

Transforming this into security-speak: a proof-of-concept uses a bug to drive the program into a weird state, thereby proving the concept, which usually just means demonstrating the bug. An exploit fully programs at least one path through the weird machine to a useful destination state.

Here are some examples, all for just one bug:

- A bug which has a provided PoC that crashes the program, but is theorized to have only a denial-of-service impact; its weird state is driven somewhat to fruition.

- The same bug with a modified PoC, where someone else theorizes the impact could be code execution; its weird state is partially driven.

- An exploit is written for the bug that claims to execute an arbitrary payload; its weird state has been fully realized in one direction.

- Another exploit is written that not only executes arbitrary code, but does so after transitioning to a state which runs privileged code; its weird state may have been fully realized at this point, unless someone else comes along and finds an even more useful transition, or it's proven there isn't one, or it's somehow proven that the state was false.

I found these words very tasteful and an excellent mindset to have when reasoning about vulnerabilities and exploitation. The best part is that these words have the potential to be used as foundations that make new ideas much more clear and precise.