Since then, a number of additional assessments have been done, including in October 2016 and June 2017. Both of those surveys demonstrated clear and continual improvement in the state of internet security. But now that tools like the Mozilla Observatory, securityheaders.io and Hardenize have become more commonplace, has the excitement for improvement been tempered?
Improvement across the web appears to be continuing at a steady rate. Although a 19% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 83,000 websites, a slight slowdown from the previous survey's 119,000 jump, but still a great sign of progress in encrypting the web's long tail.
Not only that, but an additional 97,000 of the top websites have chosen to be HTTPS by default, with another 16,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security (HSTS). Also notable is the jump in websites that have chosen to opt into being preloaded in major web browsers, via a process known as HSTS preloading. Until browsers switch to HTTPS by default, HSTS preloading is the best method for solving the trust-on-first-use problem in HSTS.
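For reference, a site opts into HSTS and preloading with a single response header along these lines (the max-age value here is just an illustrative choice):
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload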
Content Security Policy (CSP) — one of the most important recent advances due to its ability to prevent cross-site scripting (XSS) attacks — continues to see strong growth. Growth is faster in policies that ignore inline stylesheets (CSS), perhaps reflecting the difficulties that many sites have with separating their presentation from their content. Nevertheless, improvements brought about by specification additions such as 'strict-dynamic' and policy generators such as the Mozilla Laboratory continue to push forward CSP adoption.
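As a concrete illustration, a nonce-based policy built around 'strict-dynamic' might look something like this (an example shape, not a recommendation for any particular site):
Content-Security-Policy: script-src 'nonce-{random}' 'strict-dynamic'; object-src 'none'; base-uri 'none'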
Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy, Strict Transport Security, or Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.
As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. But despite new tests and harsher grading, I continue to see improvements across the board:
As 976,930 scans were successfully completed in the last survey, a decrease in failing grades by 2.9% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone. Note that the drop in A grades is due to a recent change that stops granting extra-credit points to scans that wouldn't have scored an A without them.
Thus far, over 140,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 140,000 websites, over 2,800 have improved all the way from a failing grade to an A or A+ grade.
When I first built the Observatory at Mozilla, I had never imagined that it would see such widespread use. 6.6M scans across 2.3M unique domains later, it seems to have made a significant difference across the internet. I couldn't have done it without the support of Mozilla and the security researchers who have helped to improve it.
Please share the Mozilla Observatory so that the web can continue to see improvements over the years to come!
I was recently writing some code for the Mozilla Observatory to store and interact with the HTTP status codes. As part of my code, I wanted to ensure that I would only store these status codes if they were an integer as per the HTTP/1.1 specification:
The status-code element is a three-digit integer code giving the
result of the attempt to understand and satisfy the request.
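In my case, that meant refusing to store anything that isn't a three-digit integer. A minimal check looks something like this (a sketch, not the Observatory's actual code):

def is_valid_status_code(status):
    # Per RFC 7230, the status-code is a three-digit integer, i.e. 100-999
    return isinstance(status, int) and 100 <= status <= 999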
While it is easy to create test cases for conditions that don't satisfy this requirement, it is somewhat more difficult to determine how third-party libraries will handle HTTP requests that fall outside this constraint. I looked around the internet for websites to help me test weird status codes, but most of them only let me test with the known status codes. As such, I decided to add arbitrary HTTP status codes to my naughty httpbin fork, called misbehaving.site.
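Serving an arbitrary status code is ultimately just a matter of writing whatever number you like into the status line. A minimal way to reproduce the behavior locally looks something like this (a raw-socket sketch, not how misbehaving.site is actually implemented):

import socket

def serve_status_once(code, port=8080):
    # Respond to a single request with an arbitrary (possibly bogus) status code.
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            conn.recv(4096)  # read and discard the request
            body = b"hello"
            conn.sendall(
                b"HTTP/1.1 %d NONSENSE\r\n"
                b"Content-Type: text/plain\r\n"
                b"Content-Length: %d\r\n"
                b"Connection: close\r\n"
                b"\r\n" % (code, len(body)) + body
            )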
What I discovered is that the various browser manufacturers have wildly different behavior with how they handle unknown HTTP status codes. Here is what the HTTP specification says that browsers should do:
HTTP status codes are extensible. HTTP clients are not required to
understand the meaning of all registered status codes, though such
understanding is obviously desirable. However, a client MUST
understand the class of any status code, as indicated by the first
digit, and treat an unrecognized status code as being equivalent to
the x00 status code of that class, with the exception that a
recipient MUST NOT cache a response with an unrecognized status code.
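Read literally, that means a client should collapse any unrecognized code to the x00 code of its class, something along these lines (my interpretation of the spec, not any browser's actual code):

def effective_status(code):
    # RFC 7231: treat an unrecognized status code as the x00 code of its
    # class, e.g. 499 -> 400 and 542 -> 500.
    if 100 <= code <= 599:
        return (code // 100) * 100
    # The spec is silent on codes that fall outside any defined class.
    raise ValueError("status code outside classes 1xx-5xx")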
…so what happens in reality?
Chrome's behavior is strange, but surprisingly not the strangest of the major browsers:
For negative status codes, Chrome always displays HTTP status code 200. For 0, it simply displays Finished instead of the actual status code. It otherwise simply reflects the status code, unless it exceeds 2147483647 (2^31 - 1), in which case it displays 2147483647.
Note that when exceeding 2147483647, it displays this error in the console, despite the page otherwise loading normally:
It actually took me quite a while to figure out Firefox's behavior. Let's take a look:
Status codes in Firefox are modulo 65536 (2^16), unless that works out to 0, in which case it displays status code 200.
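In other words, the displayed value roughly follows this rule (my approximation of the observed behavior, and as noted below it only holds up to a point):

displayed = code % 65536  # e.g. a served status of 65936 displays as 400
if displayed == 0:        # e.g. 65536 displays as 200
    displayed = 200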
This works up to a certain point, when it starts to display different behavior:
Note how the status icon (blue dot, yellow triangle, etc.) is dependent on the first digit of the status code, once Firefox has finished interpreting it.
Safari only accepts status codes between 1 and 999. Should the status code fall outside that range, it renders the entire HTTP response as plaintext, headers and all:
It also displays this error in the browser console. I'm not sure why, as the output is just JSON and there isn't any script on the page:
Note that if you serve from localhost instead of a remote server, it displays a different error:
Not to be left behind, Edge also has some unusual HTTP status code handling:
For status code 0, it displays (Pending), although the page otherwise loads normally. For negative status codes, it displays them as the status code modulo 4294967296 (2^32), unless the status code is less than -4294967295, in which case it displays 1.
For positive status codes, it simply reflects them. This is until the status code reaches 4294967296 or higher, in which case it shows (Pending) and the browser displays this error:
Those who have been around in computing for a long time are likely familiar with Postel's Law:
Be liberal in what you accept, and conservative in what you send.
While it seems like the neighborly thing to do, it is the bane of those of us who enjoy consistent software behavior. If the specification had simply stated that status codes falling outside 100-599 should be treated as an unrecoverable error, then we wouldn't see the unusual behavior that we see today.
Luckily, while all of the browsers have their own idiosyncrasies, none of them are actually harmful in this case.
If you enjoyed this post and would like to test how browsers handle other quirky HTTP responses, please consider opening an issue or sending a pull request to the misbehaving.site GitHub repository.
A few months after the Observatory's release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared as if it was beginning to improve, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or has the situation continued to improve over the following months?
The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.
Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.
The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. It can be difficult to implement for a new website, and retrofitting it onto an existing site — which describes most of the Alexa Top 1M — often requires extensive rearchitecting. Between steadily improving documentation, advances in CSP3 such as 'strict-dynamic', and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.
Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.
As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:
As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.
In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.
When I first built the Observatory a year ago at Mozilla, I had never imagined that it would see such widespread use. 2.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has provided me with a unique opportunity to work on a tool designed solely to make the internet a better place.
Please share the Mozilla Observatory so that the web can continue to see improvements over the years to come!
Notes accompanying the survey's results table:
Allows 'unsafe-inline' in neither script-src nor style-src
Allows 'unsafe-inline' in style-src only
Amongst sites that set cookies
Disallows foreign origins from reading the domain's contents within user's context
Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
Redirects from HTTP to HTTPS, regardless of the final domain
One of the perennial complaints about MTGO streams and recordings is how difficult the cards are to read. And it's no surprise — pretty much any program would struggle with the requirements that Magic imposes. It has to combine sometimes obscene amounts of text with an eye-wateringly small rendering area of less than a dozen square centimeters.
This problem is exacerbated by the fact that most MTGO streamers record at the standard resolution of 1080p. There are simply not enough pixels available to legibly render a font in such a small area at such a low resolution. Here is an example of what I mean:
Left: 1080p Channel Fireball recording. Middle: 4K recording, smallest hand size. Right: 4K recording, standard hand size.
This is not meant to pick on Channel Fireball. Their content is impeccable, but much of it is hard to read unless you know exactly what is going on. To be clear, it's not just Channel Fireball; even professional content put out by Wizards of the Coast suffers from similar issues. Unless you follow Standard, I suspect you'll have a hard time telling me what these cards from the recent Magic Online Championship do:
Is 4K really that big of a deal?
YouTube displays recordings in a variety of resolutions, from 144p all the way up to 2160p (4K). It may not seem as if there is a big difference between 1080p and 2160p, but remember that the “1080” in “1080p” only refers to the number of vertical pixels. In terms of overall pixels, there is a pretty vast gulf between 1080p and 4K:
Resolution           Total pixels    % of 1080p
1080p (1920x1080)    2,073,600       100%
2160p (3840x2160)    8,294,400       400%
As you can see, 4K gives you four times as many pixels (400% of 1080p) to render legible text in the exact same amount of screen space. That's why the Deathrite Shaman on the right can clearly display its entire text box despite taking up the exact same amount of space as the Deathrite Shaman on the left.
Cranking OBS to 11
Before I began recording, I simply assumed that 1080p recordings were a matter of inertia. Everybody recorded in 1080p, so what was the point in trying to bump the resolution up to 4K? After all, 4K means more CPU usage, larger files, and slower downloads. Why bother when nobody else was doing it?
My assumptions were way, way wrong. It turns out that it's actually really difficult to record in 4K while playing MTGO. My machine isn't top-of-the-line, but it's nothing to scoff at: a 2013 MacBook Pro with a quad-core 2.6GHz i7 and 16GB of memory.
I assumed that all I'd have to do was tell OBS to record at the native “Retina” resolution (3360x2100) and I could go on my merry way. What happened when I did that?
PAIN. Telling OBS to record at 4K in real time took so much CPU that my machine was rendered completely useless. I couldn't actually play MTGO, because each click took over 15 seconds to register.
I then tried telling OBS to use one of its faster CPU settings (ultrafast), but the image quality came out very poor, with lots of noise and other encoding artifacts:
Path to 4K
This process left me with a whole new respect and understanding for the professionals who do these recordings. It's simply impossible to have all of the following at once:
High quality 4K recordings
Low CPU usage
Manageably sized video files that you can upload directly to YouTube
I realized I'd have to compromise and do my recordings in two steps:
Record at high quality and low CPU usage, but with large video files
Re-encode post-recording to generate high-quality videos with low file sizes
This is certainly more work, and takes a lot more time. But it comes with some benefits:
The recording takes very little CPU usage, instead of causing the typical OBS lag
The re-encoding takes 8-12 hours, but the settings used result in YouTube quickly generating all the other resolutions post-upload
My screen's resolution (3360x2100) is actually lower than 4K (3840x2160), but the re-encoding lets me upscale to 4K
The Technical Details
So how did I do it? Here are my OBS video settings:
Recording Format: mp4
Rescale Output: unchecked (native resolution)
Rate Control: CRF
CRF: 12
Keyframe Interval: 0 (auto)
CPU Usage Preset: superfast
Variable Framerate (VFR): checked
These settings generate very high quality recordings that average about 1GB for every ten minutes of recording. Lowering the CRF value leads to higher quality files at the cost of increased CPU usage, and 12 was the highest quality my machine could handle. If you find these settings too aggressive, bump CRF to a higher number.
Once I am finished recording, I have an automated job that upscales and re-encodes with ffmpeg, using the optimal YouTube video settings:
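In spirit, the job is an ffmpeg invocation along these lines (wrapped in Python here as a sketch; the exact flag values are illustrative rather than a canonical set of YouTube settings):

import subprocess

def reencode_for_youtube(source, output="upload-4k.mp4"):
    # Illustrative ffmpeg invocation: upscale 3360x2100 footage to 4K and
    # re-encode at high quality for upload; tweak CRF/preset to taste.
    subprocess.run([
        "ffmpeg", "-i", source,
        "-vf", "scale=3840:2160:flags=lanczos",  # upscale to 4K
        "-c:v", "libx264", "-preset", "slow", "-crf", "18",
        "-pix_fmt", "yuv420p",
        "-movflags", "+faststart",
        "-c:a", "copy",
        output,
    ], check=True)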