#HITBGSEC Does Security by Obscurity Work? Attacking Software Tokens
L33tdawg: In Singapore next week? You don't want to miss this #HITBGSEC talk - http://gsec.hitb.org/sg2016/sessions/attacking-software-tokens/
No, it doesn't. That is what common security wisdom says, and I belonged to that school of thought for most of my career in security. That said, in the 2000s I did my fair share of malware analysis, and even as a strong believer in the claim above, I couldn't deny that some of the defenses malware authors had come up with were at least mildly annoying. To be honest, analyzing those samples was probably some of the most challenging work of my career.
During the last couple of years, my team and I started re-encountering anti-RE measures in our daily work. Only this time it wasn't malware or DRM (not that there's a big difference between the two); it was regular apps that refused to be reverse engineered. In the mobile world, security by obscurity was becoming a thing.
This rise of obfuscation and anti-tampering in run-of-the-mill mobile apps is causing some confusion amongst security testers, something that became clear to me when I started working on the OWASP Mobile Security Testing Guide earlier this year. Terminology such as “vulnerability to reverse engineering attack” was floating around, and it wasn't clear whether, or how, mobile apps should be required to impede reverse engineering. Is it a vulnerability not to prevent reverse engineering? Should apps that store sensitive data refuse to run on rooted devices? Should symbols always be stripped? How much obfuscation is enough?
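To make the "rooted device" question concrete, here is a minimal sketch of the kind of naive root check many apps ship: it simply looks for an su binary in a few well-known locations. The class name and path list are illustrative assumptions rather than code from any particular app, and a check like this is trivially bypassed by anyone reverse engineering the app, which is exactly why its value is debated.

```java
import java.io.File;

public class RootCheck {

    // Locations where an "su" binary is commonly found on rooted Android devices
    // (illustrative list, not exhaustive).
    private static final String[] SU_PATHS = {
            "/system/bin/su",
            "/system/xbin/su",
            "/sbin/su",
            "/system/sd/xbin/su"
    };

    // Returns true if an "su" binary exists at any of the known paths.
    // A simple heuristic that a reverse engineer can patch out or hook with little effort.
    public static boolean isLikelyRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }
}
```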