Time again to annoy a bunch of my peers - but only the ones who skim articles rather than take the time to read them fully:
Security quality is disjoint from openness. Free and Open Source Software (FOSS) is clearly and famously not less secure than closed / proprietary software - but neither is FOSS necessarily more secure than proprietary.
Instead, the characteristic that one associates with "good" code - secure code, robust code, useful, interesting code - is the existence of a dedicated community of intelligent and aware developers, all of whom have access to the codebase and the freedom and autonomy to "review" and "commit" on pretty much any topic.
Open Source makes the development of such a community a hell of a lot easier, but it is not mandatory; and it is further my experience that FOSS projects can more easily turn toxic, with greater people-management issues. Anyone who has worked in FOSS for a few years can name a few towering egos and intra-project troublemakers whom they would have liked to have "fired".
So I know and have worked with both FOSS and proprietary software teams who have produced good, secure code; and I have also seen some stinkingly bad, insecure crap emitted from both camps. Thus when Steven Chang of Trend Micro opines that:
"Android is open-source, which means the hacker can also understand the underlying architecture and source code." (...) "We have to give credit to Apple, because they are very careful about it. It's impossible for certain types of viruses (to operate)".
...I believe he's talking through his hat.
There's no reason to believe that Apple's iOS is better or worse than Android from a security perspective - at least from the perspective of openness. There may be more fundamental architectural issues to distinguish the platforms, but (again) they both have Unix-like heritage, so they both start from a good place.
Further, he appears to be saying that there are "good developers" who need to understand how (a platform) works, and there are "evil developers" who should not be permitted that understanding; but how are good developers to be prevented from later turning evil?
And how will the good developers gain an initial understanding of (the platform) in the face of obscurantism meant to prevent evil developers from gaining the same understanding?
It appears to be an insane proposition; unfortunately Steven conflates these statements with common sense like:
"Smartphones are the next PC, and once they're adopted by enterprises, data loss will be a very key problem"
...which is utterly defensible, if only because phones will become ever smarter, storage will grow ever larger, and we already know what media circus occurs when some Government functionary leaves a laptop on a train.
The actuality bears repeating: software security is in no way a function of development openness. FOSS is great for bug finding, and Kerckhoffs's Principle applies: a system is most secure when it remains robust in the face of an enemy who knows everything about it except the key.
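To make the principle concrete, here is a minimal sketch using only Python's standard library. The HMAC construction is an open standard (RFC 2104) - an attacker can read every line of this code - yet tags cannot be forged without the one secret in the system, the key. (The message and variable names here are illustrative, not from any real project.)

```python
# Kerckhoffs's Principle in miniature: the algorithm is public,
# the key is the only secret.
import hashlib
import hmac
import secrets

# The sole secret in the whole system.
key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    # HMAC-SHA256 over the message; the construction itself is
    # fully documented and open.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, candidate: bytes) -> bool:
    # Constant-time comparison, to avoid timing side channels.
    return hmac.compare_digest(tag(message), candidate)

msg = b"release build 1.2.3"
t = tag(msg)
assert verify(msg, t)              # genuine tag accepted
assert not verify(b"tampered", t)  # altered message rejected
```

The point of the sketch: publishing `tag()` and `verify()` costs the defender nothing, because the security argument rests entirely on `key` remaining secret - exactly the property Kerckhoffs demands, and exactly the property that neither openness nor obscurity of the source code changes.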
But to extrapolate that FOSS development magically provides security because enemies automatically know everything about the system, is cargo cult thinking; and to make Chang's opposite assertion that openness undermines security is equally incorrect.
Instead it would be fair to say that FOSS is probably the superior method to gather a community of clueful people together - people who can cross-check each other and thereby develop secure code.
I would say that - but I would also say that it's only about a 60/40 or 70/30 benefit, because vibrant and security-aware development communities can also be synthesised in the proprietary world.