The troubled US HealthCare.gov site skipped many security requirements before launch, and the Obama administration granted it a waiver to go live despite acknowledged uncertainty and a high level of security risk.
CBS News has reported that the final security checks for HealthCare.gov were delayed three times, and that the final overall security assessment was never performed. While aware of these setbacks and delays, including the missing security assessments, the Obama administration granted itself a waiver to launch HealthCare.gov with a "level of uncertainty" that was deemed a high risk.
Ben Simo, one of the security researchers following HealthCare.gov, has blogged about many of the website's issues, keeping a running tab on its privacy and security problems. Such issues include information disclosure, exposed attack surfaces, and direct privacy violations. While CMS (the Centers for Medicare & Medicaid Services), the government agency responsible for HealthCare.gov, is fixing problems on a daily basis, the fact that such obvious issues existed at all has frustrated many security experts.
Kyle Adams, the chief software architect for Junos WebApp Secure at Juniper Networks, examined a few healthcare websites on his own, including those run by Kentucky, Vermont, and Maryland, as well as HealthCare.gov. Limiting himself to details transmitted to every user via HTTP and presented in HTML, he found that many of the sites contain common errors that a static code analysis test would have flagged.
Adams quickly learned that each of the websites was developed with a different architecture and language. For example, HealthCare.gov is built with Java on top of Tomcat, with a WSO2 services layer and a Google search appliance. Kentucky's site was built on ASP.NET, Vermont's with Java, and Maryland managed to keep its implementation details hidden.
"In all cases, except Maryland, a fair amount of backend implementation information is disclosed to the client. This is generally not advisable, because it allows attackers to target their attacks more efficiently. It also allows attackers to identify the architecture and find holes in the business logic and code interactions. I give Maryland credit for hiding that information so efficiently, however it's possible it is just as transparent as the others once you get past the login (I couldn't verify)," Adams told CSO.
"In Kentucky's case, they chose a language that has this transparent quality designed into the framework, which is not a wise choice for such a sensitive website. ASP.net is notorious for this, because it enables developers who do not fully understand web development to build complex web applications... It's also a red flag for attackers, because it means the developers probably didn't understand the details of web development well enough to protect themselves from common vulnerabilities."
Both Kentucky's website and HealthCare.gov exhibited signs of buggy code, Adams said, as they produced errors suggesting that developers didn't handle specific conditions. This is another red flag, because buggy code usually means vulnerable code. For the same reason, the most likely attacks are also the most common ones.
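The "unhandled condition" problem Adams points to can be sketched in a few lines. The function names and data here are hypothetical; the point is that the first version lets an unexpected input raise an exception, which many frameworks will render to the client as an error page complete with internal details, while the second handles the condition and returns only a generic message.

```python
def lookup_plan_vulnerable(plans, plan_id):
    # An unexpected plan_id raises KeyError; a framework's default error
    # page would then expose the exception, and often a stack trace.
    return plans[plan_id]

def lookup_plan_safe(plans, plan_id):
    # Handle the missing-key condition explicitly, so the client only
    # ever sees a generic message, never internal state.
    try:
        return plans[plan_id]
    except KeyError:
        return "An error occurred. Please try again later."

plans = {"bronze-01": "Bronze Plan"}
print(lookup_plan_safe(plans, "no-such-plan"))
```

The stack traces and framework error pages that escape in the vulnerable case are exactly the sort of disclosure researchers like Simo have been cataloguing.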
"The likely first and most popular attacks will Cross-Site Scripting (XSS), SQL Injection, Cross-Site Request Forgery (CSRF) and Open Redirection. Some of the sites did in fact have signs of CSRF protection, but not all of them. I think the first round of exploits we are likely to see will involve phishing," he explained.
"For example, if an attacker finds an XSS, CSRF, or Open Redirect, all of those would allow them to launch extremely effective phishing campaigns. SQL Injection is a little harder to find and use, so that will likely come next. Once most of the low hanging fruit has been addressed, I would expect the attacks to start focusing more on business logic exploits and mass data exfiltration."
These types of problems can be mitigated with a number of defenses, including Web Application Firewalls, intrusion deception, DDoS protection, and intrusion detection systems. However, regular code audits and security testing should also be part of the program.
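One thing such a code audit would flag is SQL built by string concatenation, the root cause of the SQL injection attacks Adams predicts. A minimal sketch using Python's sqlite3 module, with an entirely hypothetical enrollees table: the parameterized query lets the driver handle escaping, so a classic injection payload is matched literally instead of rewriting the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollees (username TEXT, ssn TEXT)")
conn.execute("INSERT INTO enrollees VALUES ('alice', '000-00-0001')")

def find_enrollee(username):
    # Parameterized query: the "?" placeholder means input such as
    # "' OR '1'='1" is treated as a literal value, not as SQL.
    cur = conn.execute(
        "SELECT username FROM enrollees WHERE username = ?", (username,)
    )
    return cur.fetchall()

print(find_enrollee("alice"))        # matches the real row
print(find_enrollee("' OR '1'='1"))  # injection attempt matches nothing
```

Had the query been assembled with string formatting instead, the second call would have returned every row in the table.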
"All companies dealing with development on these sites should also adopt a very strict secure development lifecycle to avoid introducing new issues as the old ones are patched," Adams said.
Then again, none of that does any good if you can just grant yourself a waiver and skip security assessments before a product launch.