Dimitri Glazkov

Web and About

What the Heck is Shadow DOM?

with 71 comments

If you build Web sites, you probably use Javascript libraries. If so, you are probably grateful to the nameless heroes who make these libraries not suck.

One common problem these brave soldiers of the Web have to face is encapsulation. You know, one of them turtles on which the Object-Oriented Programming foundation sits, upon which stands most of the modern software engineering. How do you create that boundary between the code that you wrote and the code that will consume it?

With the exception of SVG (more on that later), today’s Web platform offers only one built-in mechanism to isolate one chunk of code from another — and it ain’t pretty. Yup, I am talking about iframes. For most encapsulation needs, frames are too heavy and restrictive.

What do you mean I must put each of my custom buttons in a separate iframe? What kind of insane are you?

So we need something better. Turns out, most browsers have been sneakily employing a powerful technique to hide their gory implementation details. This technique is called the shadow DOM.

My name is DOM, Shadow DOM

Shadow DOM refers to the ability of the browser to include a subtree of DOM elements into the rendering of a document, but not into the main document DOM tree. Consider a simple slider:

<input id="foo" type="range">
 

Pop this code into any WebKit-powered browser, and it’ll appear like so:

Typical Slider Control on WebKit

Simple enough. There’s a slider track and there’s a thumb, which you can slide along the track.

Wait, what? There’s a separate movable element inside of the input element? How come I can’t see it from Javascript?

var slider = document.getElementById("foo");
console.log(slider.firstChild); // logs null
 

Is this some sort of magic?

No magic, my fair person of the Web. Just shadow DOM in action. You see, browser developers realized that coding the appearance and behavior of HTML elements completely by hand is a) hard and b) silly. So they sort of cheated.

They created a boundary between what you, the Web developer, can reach and what's considered implementation details, thus inaccessible to you. The browser, however, can traipse across this boundary at will. With this boundary in place, they were able to build all HTML elements using the same good old Web technologies, out of divs and spans, just like you would.

Some of these are simple, like the slider above. Some get pretty complex. Check out the video element. It’s got trigger buttons, timelines, a hover-appearing volume control, you name it:

WebKit Video Element With Controls

All of this is just HTML and CSS — hidden inside of a shadow DOM subtree.

To borrow a verse from that magnetic meme duo, “how does it work?” To build a better mental model, let’s pretend we have a way to poke at it with Javascript. Given this simple page:

<html>
<head>
<style> p { color: Green; } </style>
</head>
<body>
<p>My Future is so bright</p>
<div id="foo"></div>
<script>
    var foo = document.getElementById('foo');
    // WARNING: Pseudocode, not a real API.
    foo.shadow = document.createElement('p');
    foo.shadow.textContent = 'I gotta wear shades';
</script>
</body>
</html>
 

We get the DOM tree like this:

<p>My Future is so bright</p>
<div id="foo"></div>
 

But it is rendered as if it were this:

<p>My Future is so bright</p>
<div id="foo"> <!-- shadow subtree begins -->
    <p>I gotta wear shades</p>
</div> <!-- shadow subtree ends -->
 

Or visually like so:

Shadow DOM Example

Notice how the second part of the rendered sentence is not green? That’s because the p selector I have in my document can’t reach into the shadow DOM. How cool is that?! What would a framework developer give to have powers like this? The ability to write your widget and not worry about some random selector fiddling with your style seems … downright intoxicating.

Course of Events

To keep things natural, events fired in a shadow DOM subtree can be listened to in the document. For instance, if you click on the mute button in the audio element, your event listeners on an enclosing div would hear the click:

<div onclick="alert('who dat?')">
    <audio controls src="test.wav"></audio>
</div>
 

However, if you ask to identify who fired the event, you’ll find out it was the audio element itself, not some button inside of it.

<div onclick="alert('fired by:' + event.target)">
    <audio controls src="test.wav"></audio>
</div>
 

Why? Because when crossing the shadow DOM boundary, the events are re-targeted to avoid exposing things inside of the shadow subtree. This way, you get to hear the events, fired from the shadow DOM, and the implementor gets to keep their details hidden from you.
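Here's the same point in script form, as a sketch (assuming a browser whose media controls are built with shadow DOM; the host id is my own):

<div id="host">
    <audio controls src="test.wav"></audio>
</div>
<script>
    // The listener lives outside the shadow boundary, so for a click
    // on any control inside the player, the target it reports is the
    // <audio> element itself, never the inner button.
    document.getElementById('host').addEventListener('click', function(event) {
        console.log(event.target.tagName); // logs "AUDIO"
    });
</script>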

Reaching into Shadows with CSS

One other trick up the sleeve is the ability to control how and whether CSS reaches into the shadow subtree. Suppose I want to customize the look of my slider. Instead of the standard OS-specific appearance, I want it to be stylish, like so:

input[type=range].custom {
    -webkit-appearance: none;
    background-color: Red;
    width: 200px;
}
 

The result I get is:

Slider with a custom-styled track

Ok, that's nice, but how do I style the thumb? We already determined that our usual selectors don't go into the shadow DOM tree. Turns out, there's a handy pseudo attribute capability, which allows shadow DOM subtrees to associate an arbitrary pseudo-element identifier with an element in the subtree. For example, the thumb in the WebKit slider can be reached at:

input[type=range].custom::-webkit-slider-thumb {
    -webkit-appearance: none;
    background-color: Green;
    opacity: 0.5;
    width: 10px;
    height: 40px;
}
 

Which gives us:

Fully custom-styled slider

Ain’t it great? Think about it. You can style elements in the shadow DOM without actually being able to access them. And the builder of the shadow DOM subtree gets to decide which specific parts of their tree can be styled. Don’t you wish you had powers like this when building your UI widget toolkit?

Shadows with Holes, How’s that for a Mind-bender?

Speaking of awesome powers, what happens when you add a child to an element with a shadow DOM subtree? Let’s experiment:

// Create an element with a shadow DOM subtree.
var input = document.body.appendChild(document.createElement('input'));
// Add a child to it.
var test = input.appendChild(document.createElement('p'));
// .. with some text.
test.textContent = 'Team Edward';
 

Displaying as:

Input Element and Nothing Else

Whoa. Welcome to the twilight DOM — a chunk of document that’s accessible by traversal but not rendered on the page. Is it useful? Not very. But it’s there for you, if you need it. Teens seem to like it.
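The child is still perfectly reachable from script, even though it never renders. Continuing the experiment above:

console.log(input.firstChild === test); // logs true
console.log(test.textContent); // logs "Team Edward"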

But what if we did have the ability to show an element's children as part of its shadow DOM subtree? Think of the shadow DOM as a template with a hole, through which the element's children peek:

// WARNING: Pseudocode, not a real API.
var element = document.getElementById('element');
// Create a shadow subtree.
element.shadow = document.createElement('div');
element.shadow.innerHTML = '<h1>Think of the Children</h1>' +
    '<div class="children">{{children-go-here}}</div>';
// Now add some children.
var test = element.appendChild(document.createElement('p'));
test.textContent = 'I see the light!';
 

As a result, if you traverse the DOM you will see this:

<div id="element">
    <p>I see the light!</p>
</div>
 

But it will render like this:

<div id="element">
    <div> <!-- shadow tree begins -->
        <h1>Think of the Children</h1>
        <div class="children"> <!-- shadow tree hole begins -->
        <p>I see the light!</p>
        </div> <!-- shadow tree hole ends -->
    </div> <!-- shadow tree ends --> 
</div>
 

As you add children to the element, they act as normal children if you look at them from the DOM, but rendering-wise, they are teleported into a hole in the shadow DOM subtree.
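A minimal sketch of the same idea, using the attachShadow and <slot> API that eventually shipped in browsers (well after this post was written; it is not the pseudocode API above):

var element = document.getElementById('element');
var shadow = element.attachShadow({ mode: 'open' });
// the <slot> plays the role of the {{children-go-here}} hole
shadow.innerHTML = '<h1>Think of the Children</h1>' +
    '<div class="children"><slot></slot></div>';
var test = element.appendChild(document.createElement('p'));
test.textContent = 'I see the light!';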

This is the point where you admit that this is pretty cool and start asking:

When can I have it in my browser?

Homework Assignment

Did you think you'd read through all this preaching and get away without homework? As a Javascript library or framework developer, try to think of all the different great things having shadow DOM would allow you to do. Then think of specific use cases (actual/pseudo code a plus) where shadow DOM could be applied. To help you get your thinking groove going, here is the current list of use cases.

Finally, share your use cases on the public-webapps mailing list. The discussion about adding these capabilities to the Web platform is under way, and your help is needed.

If you aren't much of a framework writer, you can still participate — by cheering for the shadow DOM and spreading the joy on your favorite social networking site. Because joy is what it's all about.

PS. SVG and Shadow DOM

Almost forgot. Believe it or not, SVG has actually had shadow DOM since the beginning. The trouble is, its shadow DOM is very… shady. No, that’s not it. There’s another qualifier that also begins with “sh” and ends with a “y”. Yeah, that one. I could go on, but trust me on this. Or read the spec.

Written by Dimitri Glazkov

January 14, 2011 at 4:40 pm

Posted in WebKit


From HTML5 to Gibson’s Matrix?

with 2 comments

I shouldn’t admit it, but — what the heck. I haven’t read the Sprawl Trilogy. Until this weekend. After falling prey to another round of the seasonal crud, and with the long Memorial Day weekend in sight, I dove in.

The books aged extremely well. It was too easy to ignore the awkward artifacts of the '80s culture and go with the smooth and intricate flow. It felt just right. Not the mind-boggling, monolithic Stephenson's universe that pounds you with all its kilo-page might. It was gentler and more focused on the characters, rather than the surrounding gadgetry and sound reasoning behind its existence.

Anywho. I walked away inspired. The Sprawl was a tantalizing illusion, my brain spinning in a vertigo of subconscious attempts to fill in the missing engineering details. But in riding this high, I also felt disappointment. It's 2009, for crying out loud. Where are the AIs? I mean those that can reasonably fool a Turing test? Where are the consoles that connect you directly to the full sensory representation of the Internet? And hovercrafts?! I want my frickin hovercrafts!

How is it that we are still tinkering with a 10-year-old hypertext format, asymptotically trying to make it work right on our respective rendering engines, bolting new steel plates on the old wooden boat as it creaks, sinking further under the weight of our add-ons? How come there's a whole echelon of the computer industry burning midnight oil congealing bits of CSS, HTML, and JS into the scary, scary nonsensical frankensteins that we call Web sites? And how come it is so hard to build and grow these sites — not to mention use them?

Where have we gone wrong? Perhaps it was the naive notion that HTML wasn't just an accidental leader of the rising wave, that it was somehow special, because it was just like text and thus "easy to use"? I shudder even typing these three words. Or was it the idea that the Web is what it is and we shouldn't break it? Or maybe it was us, proclaiming that content is king and that the vehicle didn't matter? We've become content with what we've got. Even new aspirations, however alien, look suspiciously like the same old stuff.

But yes, we are where we are. Otherwise, we wouldn't be anywhere. Right? Riiight. We have this spec, called HTML5, and we're trying to make it work. It's better than what we have today, and it is a step forward. But in the big scheme of things — is this what we need? Small, painful incremental steps on the burning coals of the Web as we know it? Is this taking us somewhere? And if it is, are we there yet?

Are we there yet?

Written by Dimitri Glazkov

May 25, 2009 at 3:46 pm

Posted in Personal, Rant


Lucky Chrome

with 8 comments

How can you explain this sheer amount of luck? Are there any special terms for it? I have no clue.

To illustrate — this year, yours truly:

  • After 14 long years, finally got his green card
  • Moved to California
  • Landed a job at Google
  • … on the Google Chrome team!
As my good friend put it: "Wear your seatbelt. You have used up all of your luck."

Written by Dimitri Glazkov

September 3, 2008 at 11:19 am

Posted in Personal


Information Architecture Presentation at IPSA

with 2 comments

Today, I gave a small presentation on a large topic, “Information Architecture on a Large Scale” at IPSA (that’s Information Professionals Society of Alabama). Here are the slides:

And here are the notes:

Thank you guys for the warm reception! And if you have anything to add, comment, or critique, please leave a comment or two.

Update: here is a video, courtesy of the fearless Brit Mansell:

Written by Dimitri Glazkov

July 11, 2008 at 5:03 pm

Thank You, Birmingham

with 3 comments

It’s been a long time. I remember driving down I-59 for the first time and suddenly seeing you, lit up, silent and magical in the damp summer heat of 1995. I remember the shock of first encountering the true Southern talk at a McDonald’s drive-through and not being able to comprehend a word — or even make out a syllable. This was a whole different world. This was a whole different time.

Over the next 13 years, I learned lots of things. I learned that driving expensive German cars is not at all what I really want from my life. As a side effect, I learned how to get in debt up to my eyeballs and how to get out of it. I learned that parents will love you no matter what and that their hearts will bleed as they watch you making the stupidest mistakes on your path to comprehension of life.

I also learned that you can't "fix" people or change them to your liking, no matter how hard you try. I learned that things will happen in the most unpredictable ways and that a beam of light will shine in the darkest of nights to reveal a new path. I learned what it means to be a family man and exactly how little sleep young fathers and mothers need to keep going.

Along the way, you were there for me. You cheered for my successes. You helped me deal with failures and consequences of poor choices. You taught me about serendipity, resilience, dedication and faith. And most of all, you taught me what it means to truly love someone.

You opened my eyes to the complexity and depth of the racial and cultural divide of this world and gave me hope that this divide can be overcome, even if one person at a time.

Thank you, Birmingham. Thanks for my friends, the opportunities, and the impeccable Southern hospitality. Thank you for your wisdom and willingness to embrace this quirky Russian.

On August 1, we part ways. True to your old-fashioned ways, you stay where you are. But a little part of you will move on with me and my family. Mountain View, here we come. Looking forward to meeting y’all out in California.

Written by Dimitri Glazkov

July 6, 2008 at 2:13 pm

Wrap Your Head Around Gears Workers

with one comment

Google I/O was a continuous, three-thousand-person mind meld, so I talked and listened to a lot of people last week. And more often than not I discovered that Gears is still mostly perceived as some wand that you can somehow wave to make things go offline. Nobody is quite sure how, but everyone is quite sure it's magic.

It takes a bit of time to accept that Gears is not a bottled offlinifier for sites. It takes even more time to accept that it's not even an end-user tool, or something that just shims between the browser and the server and somehow saves you from needing to rethink how you approach the Web to take it offline. The primitives offered by Gears are an enabling technology that lets you make completely new things happen on the Web, and it is your task, as the developer, to apply them to problems specific to your Web application.

And it's probably the hardest to accept that there is no one-size-fits-all solution to the problem of taking your application offline. Not just because the solution may vary depending on what your Web application does, but also because the actual definition of the problem may change from site to site. And pretty much any way you slice it, the offline problem is, um, hard.

It’s not surprising then that all this thinking often leaves behind a pretty cool capability of Gears: the workers. Honestly, workers and worker pools are like the middle child of Gears. Everybody kind of knows about them, but they’re prone to be left behind in an airport during a family vacation. Seems a bit unfair, doesn’t it?

I missed the chance to see Steven Saviano's presentation on Google Docs, but a hallway conversation revealed that we share similar thoughts about Gears workers: it's all about how you view them. The workers are not only for crunching heavy math in a separate thread, though that certainly is a good idea. The workers are also about boundaries and crossing them. With cross-origin workers and the ability to make HTTP requests, it takes only a few mental steps to arrive at a much more useful pattern: the proxy worker.

Consider a simple scenario: your JavaScript application wants to consume content from another server (the vendor). The options are fairly limited at the moment — you either need a server-side proxy or have to resort to JSON(P). Neither solution is particularly neat, because the former puts undue burden on your server and the latter requires complete trust of another party.

Both approaches are frequently used today, mitigated by some combination of raw server power and vendor karma. The upcoming cross-site XMLHttpRequest and its evil twin XDR will address this problem at the root, but neither is yet available in a released product. Even then, you are still responsible for parsing the content. Somewhere along the way, you are very likely to write some semblance of a bridge that translates HTTP requests and responses into methods and callbacks, digestible by your Web application.

This is where you, armed with the knowledge of the Gears API, should go: A-ha! Wouldn’t it be great if the vendor had a representative, who spoke JavaScript? We might just have a special sandbox for this fella, where it could respond to our requests, query the vendor, and pass messages back in. Yes, I am talking about a cross-origin worker that acts as a proxy between your Web application and the vendor.

As Steven points out in his talk (look for the sessions on YouTube soon — I saw cameras), another way to think of this relationship is the RPC model: the application and the vendor worker exchange messages that include a procedure name, body, and perhaps even version and authentication information, if necessary.

Let’s imagine how it’ll work. The application sets up a message listener, loads the vendor worker, and sends out the welcome message (pretty much along the lines of the WorkerPool API Example):

// application.js:
var workerPool = google.gears.factory.create('beta.workerpool');
var vendorWorkerId;
// true when vendor and client both acknowledged each other
var engaged;
// set up application listener
workerPool.onmessage = function(messageText, senderId, message) {
  if (message.sender != vendorWorkerId) {
    // not the vendor, pass
    return;
  }
  if (!engaged) {
    if (message.text == 'READY') {
      engaged = true;
    }
    return;
  }
  processResponse(message);
}
vendorWorkerId = workerPool.createWorkerFromUrl(
                                'http://vendorsite.com/workers/vendor-api.js');
workerPool.sendMessage('WELCOME', vendorWorkerId);

As the vendor worker loads, it sets up its own listener, keeping an ear out for the WELCOME message, which is its way to hook up with the main worker:

// vendor-api.js:
var workerPool = google.gears.workerPool;
// allow being used across origin
workerPool.allowCrossOrigin();
var clientWorkerId;
// true when vendor and client acknowledged each other
var engaged;
// set up vendor listener
workerPool.onmessage = function(messageText, senderId, message) {
  if (!engaged) {
    if (message.text == 'WELCOME') {
      // handshake! now both parties know each other
      engaged = true;
      clientWorkerId = message.sender;
      workerPool.sendMessage('READY', clientWorkerId);
    }
    return;
  }
  // listen for requests
  processRequest(message);
}

As an aside, the vendor can also look at message.origin as an additional client validation measure, from simple "are you on my subdomain" checks to full-blown OAuth-style authorization schemes.
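As a freehand sketch, such a check could sit at the top of the vendor's listener (the allowed origin below is a made-up example):

// vendor-api.js:
workerPool.onmessage = function(messageText, senderId, message) {
  // ignore messages from clients we don't recognize
  if (message.origin != 'http://app.example.com') {
    return;
  }
  // ... proceed with the WELCOME handshake and request processing
};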

Once both the application and the vendor worker acknowledge each other's presence, the application can send request messages to the vendor worker and listen for responses. The vendor worker in turn listens for requests, communicates with the vendor server, and sends the responses back to the application. Instead of being rooted in HTTP, the API now becomes a worker message exchange protocol. The respective processing functions, processRequest and processResponse, are then responsible for handling the interaction (caution, freehand pseudocoding here and elsewhere):

// vendor-api.js
function processRequest(message) {
  var o = toJson(message); // play safe here, ok?
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch (o.command) {
    case 'public': // fetch all public entries
      // make a request to server, which fires specified callback on completion
      askServer('/api/feed/public', function(xhr) {
        var responseMessage = createResponseMessage('public', xhr);
        // send response back to the application
        workerPool.sendMessage(responseMessage, clientWorkerId);
      });
      break;
    // TODO: add more commands
  }
}

// application.js
function processResponse(message) {
  var o = toJson(message);
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch(o.command) {
    case 'public': // public entries received
      renderEntries(o.entries);
      break;
    // TODO: add more commands
  }
}

You could also wrap this into a more formal abstraction, such as the Pipe object that I developed for one of my Gears adventures.

Now the vendor goes, whoa! I have Gears. I don't have to rely on dumb HTTP requests/responses. I can save a lot of bandwidth and speed things up by storing the most current content in a local database and only querying for changes. And the fact that this worker continues to reside on my server allows me to keep improving it and offering new features, as long as the message exchange protocol remains compatible.
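For instance (freehand pseudocoding again, with a made-up table layout), the vendor worker could keep its cache in a Gears database:

// vendor-api.js:
var db = google.gears.factory.create('beta.database');
db.open('vendor-cache');
db.execute('CREATE TABLE IF NOT EXISTS Entries (id TEXT, body TEXT)');

// serve reads from the local cache; a separate step would ask the
// vendor server for changes and update the table
function cachedEntries() {
  var rs = db.execute('SELECT body FROM Entries');
  var entries = [];
  while (rs.isValidRow()) {
    entries.push(rs.field(0));
    rs.next();
  }
  rs.close();
  return entries;
}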

And so you and the vendor live happily ever after. But this is not the only happy ending to this story. In fact, you don't even have to go to another server to employ the proxy model. The advantage of keeping your own server's communication and synchronization plumbing in a worker is pretty evident once you realize that it never blocks the UI and naturally decouples what you'd consider the Model part of your application from the rest. You could have your application go offline and never realize it, because the proxy worker could handle both monitoring of the connection state and seamless switching between local storage and server data.
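Here is a freehand sketch of the monitoring half (same caveats as before; '/ping' is a made-up URL that the server must answer, and clientWorkerId comes from a handshake like the one above), using the Gears Timer and HttpRequest modules:

// proxy-worker.js:
var timer = google.gears.factory.create('beta.timer');
var online = false;
timer.setInterval(function() {
  var request = google.gears.factory.create('beta.httprequest');
  request.open('GET', '/ping');
  request.onreadystatechange = function() {
    if (request.readyState != 4) {
      return;
    }
    var nowOnline;
    try {
      nowOnline = (request.status == 200);
    } catch (e) {
      nowOnline = false; // request failed: we're offline
    }
    if (nowOnline != online) {
      online = nowOnline;
      // tell the application to switch between local and server data
      workerPool.sendMessage(online ? 'ONLINE' : 'OFFLINE', clientWorkerId);
    }
  };
  request.send();
}, 5000);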

Well, this post is getting pretty long, and I am no Steve Yegge. Though there are still plenty of problems to solve (like gracefully degrading this proxy model to a non-Gears environment), I hope my rambling gave you some new ideas on how to employ the worker goodness in your applications, and enough excitement to at least give Gears a try.

Written by Dimitri Glazkov

June 2, 2008 at 7:38 pm

Posted in Uncategorized

.NET on Gears: A Tutorial

with 7 comments

The google-gears group gets a lot of questions from .NET developers. So I decided to help out. In this tutorial, we will build a simple ASP.NET timesheet entry application (because, you know, everybody loooves timesheet entry). Then, we'll bolt on Google Gears to eliminate any excuses for not entering timesheets.

Say, you're sitting 15,000 feet above ground in cushy herd-class accommodations, with your laptop cracked about 60 degrees, keyboard pressed firmly against your chest, praying that the guy in front of you doesn't recline. This is the point where Google Gears comes to save you from yet another round of Minesweeper. You fire up your browser, point it to your corporate Intranet's timesheet entry page and — boom! — it comes right up. You enter the data, click submit and — boom! — the page takes it, prompting your aisle-mate to restart the search for the Ethernet socket in and around his tiny personal space. And after you cough up 10 bucks for the internet access at the swanky motel later that night, your browser re-syncs the timesheets, uploading the data entered offline to the server. Now, that's what I call impressive.

You can read more about the features and benefits of Gears on their site later. Right now, we have work to do (if you'd like, you can download the entire project and follow along).

Step 1: Create ASP.NET Application

First, we start with the ASP.NET part of the application, which is largely drag-n-drop:

  • Create new database, named YourTimesheets.
  • In this database, create a table Entries, with the following fields (you can also run this script):
    • ID int, this will also be our identity field
    • StartDateTime datetime
    • DurationMins int
    • Project nvarchar(100)
    • Billable bit
    • Comment ntext
  • In Visual Studio 2005, create an ASP.NET project, also named YourTimesheets.
  • In design mode, drag SqlDataSource from Toolbox bar onto the Default.aspx file.
  • Specify the connection string using SqlDataSource designer, pointing to the newly created database (figure 1). Save the connection string in Web.config (figure 2).
  • Then, drag GridView onto the same design surface and connect it to the data source (figure 3 and figure 4).
  • Now, the input form. I added mine by hand, but you can use the same drag from Toolbox technique to get yours (figure 5).
  • Hook up the form fields and SqlDataSource INSERT query (figure 6). You don’t have to do anything but point and click here.
  • Finally, go to the Default.aspx.cs code-behind file and make sure that the Postback saves form data by typing (gasp!) a C# statement (listing 1). Notice that I also added a self-redirect there to make sure that the page is always rendered as a result of a GET request (the Post/Redirect/Get pattern). This may seem odd to you, but how many times did you hit Refresh on your browser and saw the "woo wee! let's repost the data you just entered, again!" message? This one-liner prevents that situation.
  • At this point, you have a fully functional ASP.NET timesheet entry application, but let’s go ahead and add some styling to it (figure 7 and listing 2).

It is probably worth mentioning that no amount of styling will fix some obvious usability problems with the data entry in this particular piece of user interface, but hey, I didn’t call this article ASP.NET on Gears: A Production-ready Application, right?

Step 2: Develop Gears Application Concept

Moving on to Gears stuff. This part of the show contains graphic hand-coding and conceptual thinking that may not be appropriate for those who build their stuff using the Toolbox bar and Design View. People who are allergic to Javascript should ask their doctor before taking this product. Just kidding! You'll love it, you'll see … I think.

For this tutorial, I chose to use the 0.2.2.0 build of Gears, which is not yet a production build, but from what I heard will be shortly. This build offers quite a bit more functionality for workers, such as the HttpRequest and Timer modules, and as you'll see shortly, we'll need them in this application.

Let's first figure out how this thing will work. When connected (online), the application should behave as if Gears weren't bolted on: entry submissions go directly to the server. Once the connection is severed (application goes offline), we can use LocalServer to serve application resources so that the page still comes up. Obviously, at this point we should intercept form submission to prevent the application from performing a POST request (those are always passed through the LocalServer). As we intercept the submission, we put the submitted data into a Database table (there's a freehand sketch of this interception idea right after the component list below). Then, when back online, we replay the submissions to the server asynchronously, using WorkerPool and HttpRequest, reading from the Database table.

Speaking of back online, we'll need some way to detect the state of the application. We'll do this by setting up a WorkerPool worker, making periodic HttpRequest calls to a URL that's not registered with the LocalServer. When the request fails, we deem the state to be offline. When the request succeeds, we presume that things are online. Simple enough?

To keep our dear user aware of what's going on, we'll need to do quite a bit of DOM manipulation. No, not that Dom. This DOM. For instance, the data entered offline should be displayed for the user in a separate, clearly labeled table. We will also need to know of events like the user attempting to submit the form, so that we could intercept the submission and stuff it into the Database.

Oh, and there's one more thing. Since we build this application to operate both offline and online, we can't rely on server-based validation. For this task, I chose to write my own client-side validation, but you can try and tinker with the standard ASP.NET 2.0 validation controls and the crud they inject in your document.

To summarize, we need the following components (let's go ahead and name them, because naming things is fun):

  • Database, to write and read entries, entered offline.
  • DOM, to intercept submits, changes of input values, writing offline entries table, and other things that involve, well, DOM manipulation.
  • Monitor, to poll server and detect when the application becomes offline and online.
  • Store, to keep track of the resources that will be handled by LocalServer when application is offline.
  • Sync, to relay submitted offline data back to the server.
  • Validator, to ensure that the field data is valid before it’s submitted, whether online or offline.
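Before we define them formally, here is the freehand sketch of the interception idea promised above. It is not the tutorial's actual code: online and collectFieldValues stand in for the Monitor and DOM components we are about to define.

// assumes gears_init.js is included, so google.gears exists
var db = google.gears.factory.create('beta.database');
db.open('your-timesheets');
db.execute('CREATE TABLE IF NOT EXISTS Entries (StartDateTime TEXT,' +
    ' DurationMins INT, Project TEXT, Billable INT, Comment TEXT,' +
    ' FormData TEXT)');

document.forms[0].onsubmit = function() {
  if (online) {
    return true; // connected: let the POST go to the server as usual
  }
  // offline: stash the submission locally and cancel the POST
  db.execute('INSERT INTO Entries VALUES (?, ?, ?, ?, ?, ?)',
      collectFieldValues());
  return false;
};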

Step 3: Define Interfaces

Piece of cake! The only thing left is writing some code. Perhaps we should start with defining how these pieces of the puzzle will interact. To keep code digestible and easy to hack on (it’s a tutorial, right?), we will make sure that these interactions are clearly defined. To do that, let’s agree on a couple of rules:

  • Each component exposes a consistent way to interact with it
  • A component may not call another component directly

It’s like an old breadboard from your science club tinkering days: each component is embedded into a block of non-conductive resin, with only inputs and outputs exposed. You plug the components into a breadboard and build the product by wiring those inputs and outputs (figure 8). In our case, since our components are Javascript objects, we’ll define an input as any Javascript object member, and an output as an onsomethinghappened handler, typical for DOM0 interfaces. And here we go, starting with the Database object, in the order of the alphabet:


// encapsulates working with Gears Database module
// model
function Database() {

    // removes all entries from the model
    this.clear = function() {}

    // opens and initializes the model
    // returns : Boolean, true if successful, false otherwise
    this.open = function() {}

    // reads entries and writes them into the supplied writer object
    // the writer object must have three methods:
    // open() -- called before reading begins
    // write(r, i, nextCallback) -- write entry, where:
    // r : Array of entry fields
    // i : Number current entry index (0-based)
    // nextCallback : callback function, which must be called
    // after the entry is written
    // close() -- called after reading has completed
    this.readEntries = function(writer) {}

    // writes new entry
    // params : Array of entry fields (StartDateTime, DurationMins,
    // Project, Billable, Comment, FormData)
    this.writeEntry = function(params) {}
}
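Here is a hypothetical sketch of a caller holding up its end of that writer contract:

var database = new Database();
if (database.open()) {
  database.readEntries({
    open: function() { /* reading is about to begin */ },
    write: function(r, i, nextCallback) {
      // r is an Array of entry fields, i is the 0-based index
      console.log('entry ' + i + ': ' + r.join(', '));
      nextCallback(); // must be called to receive the next entry
    },
    close: function() { /* all entries have been read */ }
  });
}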

It’s worth noting that the readEntries method mimics the archetypical Writer and asynchronous call patterns from the .NET framework. I hope you’ll think of them as the familiar faces in this crowd. The DOM component has the most ins and outs, primarily because, well, we do a lot of things with the browser DOM:


// encapsulates DOM manipulation and events
// view
function DOM() {

    // called when the browser DOM is ready to be worked with
    this.onready = function() {}

    // called when one of the inputs changes. Sends as parameters:
    // type : String, type of the input
    // value : String, value of the input
    this.oninputchange = function(type, value) {}

    // called when the form is submitted.
    // if it returns Boolean : false, the submission is cancelled
    // submission proceeds, otherwise
    this.onsubmit = function() {}

    // hooks up DOM event handlers
    this.init = function() {}

    // loads (or reloads) entries, entered offline by creating
    // and populating a table just above the regular timesheets table
    // has the same signature as the writer parameter of the
    // Database.readEntries(writer)... because that's what it's being
    // used by
    this.offlineTableWriter = {
        open: function() {},
        write: function(r, i, nextCallback) {},
        close: function() {}
    }

    // provides capability to show an error or info message. Takes:
    // type : String, either 'error' or 'info' to indicate the type of
    // the message
    // text : String, text of the message
    this.indicate = function(type, text) {}

    // grabs relevant input values from the form inputs
    // returns : Array of parameters, coincidentally in exactly the
    // format that Database.writeEntry needs
    this.collectFieldValues = function() {}

    // returns : String, URL that is set in the form's action attribute
    this.getPostbackUrl = function() {}

    // removes a row from the offline table. Takes:
    // id : String, id of the entry
    this.removeRow = function(id) {}

    // remove the entire offline table
    this.removeOfflineTable = function() {}

    // enable or disable submit. Takes:
    // enable : Boolean, true to enable submit button, false to disable
    this.setSubmitEnabled = function(enable) {}

    // iterate through fields and initialize field values, according to type
    // Takes:
    // action : Function, which is given:
    // type : String, the type of the input
    // and expected to return : String, a good initial value
    this.initFields = function(action) {}
}

Monitor has a rather simple interface: start me and I’ll tell you when the connection changes:


// provides connection monitoring
// controller
function Monitor() {

    // triggered when connection changes
    // sends as parameter:
    // online : Boolean, true if connection became available,
    // false if connection is broken
    this.onconnectionchange = function(online) {};

    // starts the monitoring
    this.start = function() {}
}

Is this a simplicity competition? Because then Store takes the prize:


// encapsulates dealing with LocalServer
// model
function Store() {

    // opens store and captures application assets if not captured already
    // returns : Boolean, true if LocalServer and ResourceStore
    // instances are successfully created, false otherwise
    this.open = function() {}

    // forces refresh of the ResourceStore
    this.refresh = function() {}
}

The synchronization algorithm in this tutorial is exceedingly simple: we basically just start it and wait for it to complete. As each entry is uploaded, the Sync component reports it, so that we can adjust our presentation accordingly:


// synchronizes (in a very primitive way) any entries collected offline
// with the database on the server by replaying form submissions
function Sync() {

    // called when a synchronization error has occurred. Sends:
    // message : String, the message of the error
    this.onerror = function(message) {}

    // called when the synchronization is complete.
    this.oncomplete = function() {}

    // called when an entry was uploaded to the server. Sends:
    // id : String, the rowid of the entry
    this.onentryuploaded = function(id) {}

    // starts synchronization. Takes:
    // url : String, the url to which to replay POST requests
    this.start = function(url) {}
}

Finally, the Validator. It's responsible both for providing good initial values for the form and for making sure the user is entering something legible.


// encapsulates validation of values by type
function Validator() {

    // provides good initial value, given a type. Takes:
    // type : String, the type of the input, like 'datetime' or
    // 'number'
    // returns : String, initial value
    this.seedGoodValue = function(type) {}

    // validates a value of a specified type. Takes:
    // type : String, the type of the input.
    // value : String, value to validate
    // returns : Boolean, true if value is valid, false otherwise
    this.isValid = function(type, value) {}
}
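And to see how these blocks plug into the breadboard, here is a hypothetical wiring sketch that uses only the inputs and outputs declared above:

var dom = new DOM();
var monitor = new Monitor();
var sync = new Sync();

monitor.onconnectionchange = function(online) {
  dom.indicate('info', online ? 'Back online.' : 'Gone offline.');
  if (online) {
    // replay submissions collected offline back to the server
    sync.start(dom.getPostbackUrl());
  }
};
sync.onentryuploaded = function(id) {
  dom.removeRow(id);
};
sync.oncomplete = function() {
  dom.removeOfflineTable();
};
monitor.start();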

Whew! Are we there yet? Almost.

Step 4: Write Code

This is where we roll up our sleeves and get to work. There's probably no reason to offer a play-by-play on the actual process of coding, but here are a couple of things worth mentioning:

  • Javascript is a dynamic, loosely-typed language. Enjoy it. Don’t square yourself into compile-time thinking. This is funk, not philharmonic.
  • Javascript is single-threaded. The trick that you might have learned with 0-timeout doesn't actually start a new thread. It just waits for its opportunity to get back on the main thread (see the snippet right after this list).
  • Gears workers, on the other hand, are truly multi-threaded. There is some pretty neat plumbing under the hood that sorts out this dichotomy by queueing the messages, and you might want to be aware of that when writing the code. For instance, calling main thread with UI operations from a worker doesn’t make them asynchronous: the message handlers will still line up and wait for their turn. So, if your worker does a lot of waiting on the main thread, you may not see as much benefit from using the worker pools.
  • Gears currently lacks a database or resource store management console with a slick user interface (hint: you should perhaps join the project and lend a hand with that). But dbquery and webcachetool are good enough. For this project, I cooked up a small page that, upon loading, blows away all known state of the application, and that was pretty handy in development (listing 3).
  • There is a very simple way to simulate offline state on your local machine. It’s called iisreset. From command line, run iisreset /stop to stop your Web server and you’ll have a perfect simulation of a broken connection. Run iisreset /start to get the application back online.

Armed with these Gear-ly pearls of wisdom, you jump fearlessly on the interfaces above and get coding. Or… you can just see how I've done it (listing 3).

Step 5: Feed the Monkey

Feed the monkey? Wha… ?! Just wondering if you're still paying attention. Technically, we're done here. The application is working (to see for yourself, download the screencast or watch it all fuzzy on YouTube). As you may have gleaned from our coding adventure, Google Gears offers opportunities that weren't available to front-end developers before: to build Web applications that work offline or with an occasionally-available connection, to add real multi-threading to Javascript, and much more. What's cool is that Gears is already available on many platforms and browsers (including Internet Explorer), and the list is growing quickly. Perhaps PC World is onto something, calling it the most innovative product of 2007. But don't listen to me: I am a confessed Gearhead. Try it for yourself.

Written by Dimitri Glazkov

January 31, 2008 at 8:30 pm

Posted in Uncategorized
