Observatory by Mozilla: Making the Web Safer

It’s been over 25 years since Tim Berners-Lee created the first web browser, giving humanity the ability to easily access and share information with people both strange and familiar. In the 25 years of evolution that followed, browser makers such as Mozilla, Microsoft, and Google have created numerous security technologies to protect both users and websites from bad actors: those whose goals are to steal user secrets, install malware, or otherwise ruin Berners-Lee’s vision of what the world wide web could be.

Unfortunately, due to their complexity, many of these technologies have struggled with adoption. Critical security technologies such as HTTPS are in use by only 40% of the world wide web, and adoption rates for other technologies only drop from there. Today, Mozilla and I are proud to release Observatory by Mozilla as a way to raise awareness of these security measures.

Observatory is a simple tool that lets site operators quickly assess not only whether they are using these technologies, but how well they’re being used. It uses a simple grading system to provide near-instant feedback on site improvements as they are made. To assist developers and administrators, Observatory also links to quality documentation that explains how these technologies work.

We’re All Failing

Just how bad is adoption? Well, the Observatory has been used to scan over 1.3 million websites so far, and 91% of them don’t take advantage of modern security advances. These aren’t tiny sites either; among these 1.3 million websites are some of the most popular websites in the world.

Overall Results
Passing 121,984
Failing 1,212,826
Total Scans 1,334,810

When nine out of ten websites receive a failing grade, it’s clear that this is a problem for everyone. And by “everyone”, I’m including Mozilla: among our thousands of sites, a great many fail to pass. We’re working hard to fix them all! In fact, we’ve already used the Observatory to help improve many of our websites, including addons.mozilla.org, bugzilla.mozilla.org, and mozillians.org.

We’re using the Observatory as a way to democratize website security best practices, and to increase transparency around the application (or lack thereof) of existing security features. We hope to help everyone make things better.

How and Why We Built Observatory

A little over a year ago, I was fortunate to be offered a job at Mozilla, helping to improve the security of their many websites. Finally, I would have an easy job where I could put my feet up and relax all day. After all, Mozilla makes Firefox — one of the world’s most popular web browsers — so it was a certainty in my mind that their websites would be locked down, secure, and fully taking advantage of all the security technologies that Mozilla had helped create.

With a future of easy work secured, I wrote a small scanning tool to examine Mozilla’s websites and report just how well we were doing. As it examined each new site, I realized with growing dismay that my future would indeed not be filled with relaxation but instead with many tiring hours of actual work. It turned out that Mozilla — Mozilla! — didn’t do a better job of keeping up with modern website security practices than any other company or group I had worked with before.

Closing the Knowledge Gap

For most security engineers, the next several months would have been devoted exclusively to getting their own sites set up properly. Luckily, because I work for Mozilla, I was in a unique position. After all, Mozilla's mission isn’t simply to make a great web browser, but to improve the internet as a whole. I was encouraged to work on my scanning tool and make it available for the world to use.

It turns out that knowledge of all these technologies was considerably more difficult to acquire than I had assumed, even for security professionals. In retrospect, it’s not surprising: these technologies are spread across dozens of standards documents, and while individual articles may discuss them, there was no single place for site operators to learn what each technology does, how to implement it, and how important it is.

Guidelines and documentation are one thing: you can write documentation until you’re blue in the face, but if people aren’t interested in implementing it, adoption rates will still suffer. And so, one day while working on a tool to test these same Mozilla sites, I struck upon an idea. SSL Labs, a site that tests websites’ SSL/TLS configurations, had done immeasurable good for the internet by gamifying the process of improving your server’s configuration. Faced with a public letter grade, users, organizations, and companies quickly moved to improve their configurations.

Drawing upon their experience, I went to work wrapping the Observatory in an easy-to-use website to make this knowledge available to more than just security professionals. Now anybody with a web browser, a URL, and a bit of curiosity can investigate the problems their sites may have. By providing accessible and transparent results, every member of a development team, regardless of skill level or specialization, can check the URLs they own or depend on and help push for better security practices that benefit all of us.

How does it work?

Just visit the site, enter a domain and click “scan me”. That’s it! You’ll get a report back. Below you can see the report for addons.mozilla.org, the website that Firefox users use to download new addons for their browser. It’s one of Mozilla’s most important websites, and served as an early test case for the Observatory.

Addons scanned with Observatory

When we first scanned it, addons.mozilla.org got an F, just like 91% of all websites. Assisted by the constant feedback of a slowly rising grade and clear guidance on what needed fixing, the engineers on the addons team quickly improved their grade to an A+.

Testing (and Fixing) Made Easy

The Observatory performs a multitude of checks across roughly a dozen tests. You may not have heard of many of them, and that’s because their documentation is spread across thousands of articles, hundreds of websites, and dozens of specifications. In fact, despite some of these standards being old enough to have children (see Appendix below), their usage rate amongst the internet’s million most popular websites ranges from 30% for HTTPS all the way down to a depressingly low .012% for Content Security Policy.

Each test you run with the Observatory not only tells you how well you’ve implemented a given standard, but it links back to Mozilla’s single-page web security guidelines, which have descriptions, reasonings, and implementation examples for every test. You can use these guidelines in concert with Observatory scans to continuously improve and monitor the state of your website. For administrators who have lots of sites to test or developers who want to integrate it into their development process, we offer both an API and command-line tools.
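For those who want to script against it, here is a rough sketch of what driving scans programmatically might look like. The API base URL, parameter names, and response fields below are assumptions for illustration, not official documentation; consult the Observatory's own docs for the real interface.

```python
from urllib.parse import urlencode

# Assumed API location and response shape -- illustrative only.
API_BASE = "https://http-observatory.security.mozilla.org/api/v1"

def analyze_url(host: str, rescan: bool = False) -> str:
    """Build the URL that would start (or fetch) a scan for `host`."""
    query = urlencode({"host": host, "rescan": str(rescan).lower()})
    return f"{API_BASE}/analyze?{query}"

def summarize(scan: dict) -> str:
    """Turn a (hypothetical) scan result into a one-line summary."""
    return (f"{scan['grade']} (score {scan['score']}): "
            f"{scan['tests_passed']}/{scan['tests_quantity']} tests passed")

# An invented example response, shaped like the fields assumed above:
sample = {"grade": "A+", "score": 105, "tests_passed": 12, "tests_quantity": 12}
print(analyze_url("addons.mozilla.org"))
print(summarize(sample))
```

In a real integration you would fetch that URL with your HTTP client of choice and poll until the scan finishes before summarizing the result.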

We Can’t All Be Perfect

Of course, the results from the Observatory may not be a perfect fit for your site; after all, the security needs of a site like GitHub are a good deal more complicated than those of a personal blog. By encouraging the adoption of these standards even for low-risk sites, we hope to make developers, system administrators, and security professionals around the world comfortable and familiar with them. With that newfound knowledge and experience, we hope to move from a 91% failure rate to a world of mostly passing grades, with more and more sites proudly displaying their A+ rating on the Observatory by Mozilla.

Want to help make the web a safer place? Let’s work together by testing your site today!


Appendix: A Brief History of Web Security Technologies

Year        Technology                      Attack Vector                                   Adoption†
1995        Secure HTTP (HTTPS)             Man-in-the-middle, network eavesdropping        29.6%
1997        Secure Cookies                  Network eavesdropping                           1.88%
2008        X-Content-Type-Options          MIME type confusion                             6.19%
2009 - 2011 HttpOnly Cookies                Cross-site scripting (XSS), session theft       1.88%
2009 - 2011 X-Frame-Options                 Clickjacking                                    6.83%
2010        X-XSS-Protection                Cross-site scripting                            5.03%
2010 - 2015 Content Security Policy         Cross-site scripting                            .012%
2012        HTTP Strict Transport Security  Man-in-the-middle, network eavesdropping        1.75%
2013 - 2015 HTTP Public Key Pinning         Certificate misissuance                         .414%
2014        HSTS Preloading                 Man-in-the-middle                               .158%
2014 - 2016 Subresource Integrity           Content Delivery Network (CDN) compromise       .015%
2015 - 2016 SameSite Cookies                Cross-site request forgery (CSRF)               N/A
2015 - 2016 Cookie Prefixes                 Cookie overrides by untrusted sources           N/A
† Adoption rate amongst the Alexa top million websites as of April 2016.

[Category: Security] [Permalink]


X-Content-Type-Options + passive content?

The question of how X-Content-Type-Options: nosniff interacts with passive content came up today on Twitter. I had always assumed that browsers would block passive content where the MIME type was incorrect and nosniff was set, but I decided to test.

Below is a delightful image of a Snorlax. It's a PNG, but the extension is .jpg and nginx delivers its MIME type as image/jpeg:

Sleeping Snorlax
© Nintendo or The Pokémon Company or Something

Can you see this image? I sure can. If you're using Firefox, Chrome, Safari, Edge — really anything but Internet Explorer — it shows up just fine.

Here is the same image, but with no extension. By default, nginx sets its MIME type as application/octet-stream:

Sleeping Snorlax
© Same corporations or people. But I repeat myself, apparently.

Here, Firefox (50+), Edge, and Internet Explorer block it, but Chrome and Safari display it just fine.
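nginx picks these Content-Type values with a simple extension-to-type lookup, and Python's standard mimetypes module implements the same idea, which makes it easy to see why each file got the header it did. This is a sketch of the mechanism, not nginx's actual table (which lives in its mime.types file):

```python
import mimetypes

def guess_content_type(path: str, default: str = "application/octet-stream") -> str:
    """Map a filename to a Content-Type by extension, nginx-style:
    a known extension wins, anything else falls back to the default."""
    ctype, _encoding = mimetypes.guess_type(path)
    return ctype or default

print(guess_content_type("snorlax.jpg"))   # the PNG-with-a-.jpg-extension case
print(guess_content_type("snorlax"))       # no extension: falls to the default
```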

How about audio? With HTML5 audio, you get to tell it exactly what the MIME type is! Here is an mp3 (audio/mpeg), but it has a .ogg (audio/ogg) extension and I've set the HTML5 audio type attribute to audio/mp4:

© Hiroyuki Masuno? Maybe?

IE11, Edge, and Safari fail due to MIME type confusion, but not because of X-Content-Type-Options. Firefox and Chrome? They play that sweet, sweet 1987 video game music just fine.

In conclusion, I should stop making assumptions about how browsers behave, particularly when it comes to quasi-standards like X-Content-Type-Options.

[Category: Security] [Permalink]


Offensive playmats in Magic

Last Saturday (May 14th), I participated in a panel at a local judge conference on the topic of women in Magic:

It was a ton of fun, and I'm always excited for an opportunity to collaborate alongside Morgan and the Magic the Amateuring cast. Unfortunately, it led to a huge uproar on Reddit, YouTube, and Twitter.

One topic in particular led to much outrage, and that was the subject of “offensive” playmats. I completely understand this: players invest a lot of time in choosing the perfect playmat to represent their hobbies and personalities. I know I've spent countless hours searching for the ideal image, and I ended up having to write a heartfelt email in Japanese to get the high-resolution version of the image I use for my current playmat:

お世話になった方々, by ポージョンX

To that end, I wanted to help dispel some of the misconceptions around these playmats, at least from my experiences as a judge.

  • Who are you or anyone else to decide what is or is not offensive?

This is a completely fair question, and was certainly the most common and pointed of the questions I received. And it's totally justified: nobody wants to participate in a community where they are made to feel like they are being censored. And the line on what is or is not offensive is extremely hard to draw. For example, is this offensive?

This is a popular playmat that is currently being sold, and one that I have seen at an event I've judged. While I don't find it personally offensive, it's exactly the sort of playmat that I feel doesn't belong at an event that is open to the public at large.

Of course, who am I to judge whether a playmat, card sleeve, or altered card belongs at an event or not? Well, it's part of the job. Judges have a document that outlines what behaviors do and don't constitute infractions. And while many rulings are very clear cut, e.g., drawing four cards off of Ancestral Recall, many infractions and situations require the judge to use their best judgment. For example, here is one of the criteria for Unsporting Conduct — Minor:

  • A player uses excessively vulgar and profane language.

As with playmats, what can be considered “vulgar” and “profane” varies widely depending upon the audience. There's no strict definition of what these terms mean, and so it is left up to judges to determine whether or not such language falls under that classification in that circumstance. Playmats are no different. Judges use their best judgment — based upon the event and audience — to determine whether a playmat should or should not be usable at an event. This happens regardless of whether the judge acts on their own initiative or is approached by a player. And when it does occur, it almost always involves the agreement of multiple members of the judge staff.

  • Why are you and other judges constantly imposing your morality on Magic players?

We're not! I've been judging since Innistrad block, and have judged everything from prereleases to Grand Prix. In all those events over the last five years, I've only seen or heard of a player being asked to put away their playmat about a half dozen times — roughly once a year. Although I regularly see playmats that make me, personally, a bit uncomfortable, taking action requires something truly extraordinary.

And despite concerns that players will approach me complaining about playmats featuring art like this:

Bloodbraid Elf, by Steve Argyle

…it's just something that doesn't happen. And if it did, I would inform the player that the playmat is perfectly acceptable and while I empathize with their concerns, I'm not going to ask the player to put it away.

  • Why should players be punished for liking what they like?

First of all, it should be clear that requests to put away a playmat are not an infraction and are not accompanied by any sort of penalty. Instead, players are politely asked to put their playmat into their backpack, and are provided with an alternative playmat if they don't have one available. And in all those half-dozen experiences, I've never seen a player express any serious outrage; these requests have always been a complete non-event.

L3 judge Rob McKenzie recounted a conversation that he had with a player about his playmat:

Rob: Uh, could you please turn it [a playmat featuring a guy giving the opposing player the middle finger] over?
Player: Oh! Yeah! [turns playmat over] Why did I think this was okay, and why did you not catch this in the last six rounds?
Everyone: much laughter

  • Is this not a violation of a player's first amendment right to free speech?

Restrictions on free speech are about the government restricting the free speech of the public. It has absolutely nothing to do with what art a player is allowed to display at a private event on private property that is nevertheless open to the public. I and other judges take a player's right to self-expression extremely seriously, and attempt to tread very lightly when it comes to these requests.

Overall, I want to reiterate that these events are extremely rare and that judges and the community are extremely lenient and forgiving when it comes to a player's choice of playmats and sleeves. Asking a player to put away their favorite playmat is done only with extreme circumspection, and it typically involves the agreement of multiple members of the judge staff.

[Category: Magic] [Permalink]


Analysis of CSP in the Alexa Top 1M sites (April 2016)

I recently wrote about the state of security in the Alexa Top 1M sites, particularly the depressingly low utilization of the many security headers available to site developers. Today, I'm talking about Content Security Policy (CSP).

By whitelisting specific sources of content and by disabling the use of inline JavaScript, CSP can nearly eliminate the class of attacks known as cross-site scripting (XSS). So how common is its usage amongst the Internet's most popular websites? Let's take a look:

Result Count Percentage
CSP implemented without using 'unsafe-inline' or 'unsafe-eval' 45 .0047%
CSP implemented the same as above, but with default-src 'none' 8 .0008%
CSP header allows style-src 'unsafe-inline' 61 .0064%
CSP header allows script-src 'unsafe-eval' 68 .0071%
CSP header uses http: source on an https site 15 .0016%
CSP header invalid 27 .0028%
CSP header allows script-src 'unsafe-inline' 3392 .3540%
No CSP header 954791 99.62%
Total number of successfully completed scans 958407  

Yes, that's correct: only about .37% of the top million sites use CSP at all, and of that tiny percentage, only 3.3% (.012% overall) have strong CSP policies that block the use of inline JavaScript. For a specification that has had wide browser support for over two years, that's almost embarrassingly low. I'm not sure if it's because the CSP specification is too complicated to understand or too complicated to implement, but web security professionals are failing here.
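For the curious, the bucketing above can be sketched with a naive policy parser. This is an illustration only: real CSP parsing has many more edge cases (fallback rules, nonces, hashes), and the function names here are my own.

```python
def parse_csp(header: str) -> dict:
    """Parse a serialized CSP header into {directive: [source, ...]}."""
    policy = {}
    for directive in header.split(";"):
        tokens = directive.strip().split()
        if tokens:
            policy[tokens[0].lower()] = tokens[1:]
    return policy

def classify(header: str) -> str:
    """Roughly bucket a policy the way the survey above does.
    script-src and style-src fall back to default-src when absent."""
    policy = parse_csp(header)
    script = policy.get("script-src", policy.get("default-src", []))
    style = policy.get("style-src", policy.get("default-src", []))
    if "'unsafe-inline'" in script:
        return "allows script-src 'unsafe-inline'"
    if "'unsafe-eval'" in script:
        return "allows script-src 'unsafe-eval'"
    if "'unsafe-inline'" in style:
        return "allows style-src 'unsafe-inline'"
    return "no 'unsafe-inline'/'unsafe-eval' in scripts or styles"

print(classify("default-src 'none'; script-src 'self'"))
print(classify("default-src 'self' 'unsafe-inline'"))
```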

Of that meager .37%, what CSP directives are seeing use?

Directive Count Percentage
script-src 2500 69.66%
style-src 2016 56.17%
default-src 1913 53.30%
img-src 1555 43.33%
frame-src 1344 37.45%
font-src 1317 36.70%
connect-src 1203 33.52%
report-uri 1037 28.89%
object-src 980 27.31%
frame-ancestors 916 25.52%
media-src 912 25.41%
child-src 126 3.51%
form-action 70 1.95%
reflected-xss 39 1.09%
referrer 33 0.92%
base-uri 22 0.61%
sandbox 15 0.42%
plugin-types 4 0.11%
manifest-src 1 0.03%
block-all-mixed-content 0 0.00%
upgrade-insecure-requests 0 0.00%

It's interesting to note how common frame-src, referrer, and reflected-xss are, considering that all three have since been deprecated. I myself struggled with removing frame-src, simply because child-src is not yet supported everywhere.

Although the HTTP Observatory doesn't currently try to catch errors in CSP policies, they are quite common. In my investigations, I discovered that over 3% of CSP policies contained errors. Here are some of the more common errors I discovered:

Content-Security-Policy: *
Content-Security-Policy: 'self'
Content-Security-Policy: allow https://example.com ...
Content-Security-Policy: "default-src https://example.com ..."

I have no idea how browsers interpret these errors, but it's almost certainly not what the site operator intended. Whoops! The upcoming version of the HTTP Observatory should report on these types of errors so that site operators can be certain that browsers aren't misinterpreting their intentions.

[Category: Security] [Permalink]