Answering FEE Interview Questions

The main impetus behind setting this site up was to get my head back to thinking about the web as more than an eternal consumption engine. I’m hoping to get back into tech, and to do so I will have to pass roughly 97 years of tech interviews, a daunting task even when you’ve been neck-deep in this stuff every day. I’ve been gone for 3 years, so I really have some work to do.

My friend pointed me toward some resources, and I’d like to use this as a staging site to think about the questions and answer them, and hopefully sandbox some of the stuff encountered. I’m starting with a popular GitHub Repo that has a long outline of questions to ponder, and I’ll be chiseling away at some of them over the next few posts.

Q. What'd I learn this week?

Quite a bit! For starters, I've been learning Markdown syntax. I've forever hated the thought of things like Markdown, TypeScript, CoffeeScript, etc. HTML is easy enough as it is, and the thought of littering my JavaScript with something as inane as significant indentation makes my blood run cold. I also freaking hate the phrase "syntactic sugar," so those things can go suck an egg.

Fooling with Markdown for a few days now though has made me appreciate that it does allow an author to focus on what he's writing, rather than the exact semantics behind the words. I still care a ton about the semantics, but I can go back and add more appropriate elements in where needed, and Markdown gives me the freedom to mix plaintext with HTML, so it's not bad!

(heh, as I wrote and previewed this, I saw that Markdown wasn't going to let me use its syntax inside the definition list I think is a wonderful element for a Q&A. Oh whale.)

Q. When building a new web site or maintaining one, can you explain some techniques you have used to increase performance?

One thing that used to plague larger sites was linking to a lot of external resources — CSS, JS, icons. Granted, several years ago we were quite limited, with only a literal handful of concurrent requests available, but there are still considerations to make today: modern browsers can juggle a couple dozen simultaneous connections overall, yet (as of this writing) they typically cap out around six per hostname.

Some of the biggest performance gains were made with minification and concatenation — that is, shrinking your JS and CSS with terse variable and function names, removing whitespace and linefeeds, and mushing several files into one larger one. Gzip compression over your HTTP connection further improves the speed of the download. For things like icons, we introduced CSS sprites, which pack several icons into one image and then shift which one shows via the background-position CSS property.

There's a helluva lot more to this than just managing downloads though. Other strategies involve bottom-loading JS, delayed image downloads (which rely upon JS), prioritizing content above the fold, or using things like AMP.

Q. Can you describe your workflow when you create a web page?

My workflow has always been to attack the content first, making sure that I'm using the most semantically appropriate elements and that the page's hierarchy and structure make sense even in the absence of styling. I'll never forget years ago at LinkedIn we had a CDN outage, and someone on Twitter had commented that despite there being little to no graphical or layout styles, the content, navigation and (most of) the functionality were still present. We took great pride in using progressive enhancement to make sure that the site remains usable even when things go to shit.

Q. If you jumped on a project and they used tabs and you used spaces, what would you do?

I'd burn the place to the ground, along with any of the tab-using Philistines.

Q. Describe how you would create a simple slideshow page.

I'd start with an ordered list, including markup for each image in the slideshow. This allows the full content to be available to screen readers, search engines, and any other tools someone might want to throw at the page.

Using CSS, I'd hide all but the first image in the slideshow and position them absolutely. To prevent bouncing as images change, the parent element housing the images would be given a fixed size. I'd animate the opacity for each transition, fading out then in as the images changed.

JavaScript would be required to set a timer for each image's display, and to swap styles to make one image invisible and another visible, probably with a class-name swap.
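
The timing and swapping logic can be sketched without touching the DOM. Here's a minimal version (the function names are my own invention, not any library's API) of the index rotation and the class swap a slideshow timer would drive:

```javascript
// Minimal slideshow rotation sketch; names invented for illustration.
// nextSlide wraps back to 0 after the last slide.
function nextSlide(current, total) {
  return (current + 1) % total;
}

// Report which class each slide should carry for a given visible index.
// A real implementation would toggle these classes on the <li> elements
// and let a CSS opacity transition handle the fade.
function slideClasses(total, visibleIndex) {
  return Array.from({ length: total }, (_, i) =>
    i === visibleIndex ? 'slide slide--visible' : 'slide'
  );
}

console.log(nextSlide(2, 3));      // wraps around: 2 -> 0
console.log(slideClasses(3, 0));
```

In the browser, a setInterval callback would advance the index and reapply the classes each tick; the CSS handles the visual transition.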

Q. Explain what ARIA and screenreaders are, and how to make a website accessible.

Accessible Rich Internet Applications, duh.

ARIA implementations involve adding attributes to your markup to make your content and applications more accessible to people with disabilities. For instance, though we now have a dedicated `<footer>` element to denote the end of an article, section, or document, there might be several on a single page. Using the ARIA attribute `role="contentinfo"` (which a page-level `<footer>` maps to implicitly), we can denote the main template's footer, where our contact info or privacy or copyright statements reside.

ARIA's very useful for applications where state changes alter the content and layout of a page, or change a control's visibility or disabled state; attributes like `aria-expanded` and `aria-live` regions give screen readers a way to announce those changes as they happen.
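
As a small sketch of what that looks like for a disclosure widget — the helper name here is my own, not a standard API — these are the attributes a script would set on each toggle:

```javascript
// Sketch: attributes to apply when toggling a disclosure widget.
// disclosureAttributes is an invented helper name, not a standard API.
function disclosureAttributes(isOpen) {
  return {
    // aria-expanded on the trigger is announced by screen readers
    trigger: { 'aria-expanded': String(isOpen) },
    // hidden removes the panel from the accessibility tree when closed
    panel: { hidden: !isOpen },
  };
}

console.log(disclosureAttributes(true).trigger['aria-expanded']);  // "true"
```

Keeping the ARIA state in lockstep with the visual state is the whole game: a panel that's visually hidden but still exposed to assistive tech (or vice versa) is worse than no ARIA at all.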

Q. Explain some of the pros and cons for CSS animations versus JavaScript animations.

Due to how web browsers handle threading, you may get some performance gains using CSS to animate cheap properties, specifically transform and opacity. In this scenario, the browser's main thread may be occupied by some intense JavaScript operations, but the compositor thread can still handle our small changes and keep the animation running uninterrupted.

It's likely that you're already using a JS library for your site's functionality, which typically includes competent animation support. The interface for animation via JS is often simpler and more intuitive than that provided by CSS.
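
At its core, a JS animation loop is just an easing function fed elapsed time. Here's a minimal, DOM-free sketch (the easing choice and names are arbitrary, for illustration only):

```javascript
// Minimal tween math behind a JS animation; names invented for illustration.
// easeOutQuad maps progress t in [0, 1] to an eased value in [0, 1].
function easeOutQuad(t) {
  return t * (2 - t);
}

// Interpolate a property (say, opacity) between two values at progress t.
function tween(from, to, t) {
  return from + (to - from) * easeOutQuad(t);
}

// In the browser, requestAnimationFrame would call this each frame with
// t = elapsed / duration, then write the result to element.style.
console.log(tween(0, 1, 0));   // 0
console.log(tween(0, 1, 1));   // 1
```

The JS route buys you control (pause, reverse, chaining, dynamic targets) at the cost of running on the main thread, which is exactly the trade-off against CSS animations described above.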

Q. What does CORS stand for and what issue does it address?

Cross-Origin Resource Sharing (CORS). Essentially, it covers any request for a resource that extends beyond the current origin — simple GETs for CSS, images, or JavaScript, or form submissions and XHR/fetch requests. The browser sends an Origin header (and, for non-simple requests, a preflight OPTIONS request), and the server responds with headers saying whether the page is allowed to access the resource, which methods it may use, and which HTTP headers it may send.
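
Server-side, the check boils down to comparing the request's Origin header against an allowlist and answering with the right response headers. A minimal sketch — the origins and helper name here are made up for illustration:

```javascript
// Sketch of a server-side CORS check; origins and helper name are invented.
const allowedOrigins = ['https://app.example.com', 'https://admin.example.com'];

function corsHeaders(requestOrigin) {
  if (!allowedOrigins.includes(requestOrigin)) {
    return {}; // no CORS headers: the browser blocks the cross-origin read
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Methods': 'GET, POST',
    'Access-Control-Allow-Headers': 'Content-Type',
  };
}

console.log(corsHeaders('https://app.example.com'));
console.log(corsHeaders('https://evil.example.net')); // {}
```

Note that the enforcement happens in the browser, not the server — the server merely states its policy, and the browser refuses to hand the response to the page if the policy doesn't allow it.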

Damn, that went long! But it was a great exercise for me, reacquainting myself with CORS, ARIA, and the ways in which I used to develop stuff.

Posted under: HTML