Monday, May 20, 2013

Less is more - s/blog/twitter/g

I haven't found enough time to blog lately, so I've decided to move over to Twitter. Whatever I have to say can now be found on Twitter @jsdelfino.

Sunday, August 26, 2012

Fantastic America's Cup World Series in San Francisco - The best sailors, the fastest boats...

Fantastic America's Cup world series regatta this week in the San Francisco Bay. The best sailors, the fastest boats, in the best place in the world for sailing.

That was just a warm-up to test the waters before the America's Cup finals on the real-deal 72-foot boats next summer... but what a warm-up!!!

It's all on YouTube, where you'll find replays of this week's races and much more.

Next event: same place, same boats, Oct 02-07.

Too bad my sailor son was not here to see this as he's on vacation in France, but hey, you can't have everything... Enjoy your vacation :)

Sail fun, Sail fast!

Saturday, August 11, 2012

Autoconf and Automake on Mac OS X Mountain Lion

It's summer and I've been thinking about blogging more regularly again. I'm usually too busy to find time to blog, so I'm going to try a shorter format this time: sort of a middle ground between a tweet and a full-blown blog post.

So, here's my first entry in that shorter format.

If you're upgrading to Mac OS X Mountain Lion and Xcode 4.4.1, you'll find that Xcode no longer includes the GNU Autoconf, Automake and Libtool build tools used by most open source projects to generate makefiles and dynamic libraries... That's not so great :(

I wanted to share what I did to build them myself from source, as it could help others too:

export build=~/devtools # or wherever you'd like to build
mkdir -p $build

cd $build
curl -OL
tar xzf autoconf-2.68.tar.gz
cd autoconf-2.68
./configure --prefix=$build/autotools-bin
make install
export PATH=$PATH:$build/autotools-bin/bin

cd $build
curl -OL
tar xzf automake-1.11.tar.gz
cd automake-1.11
./configure --prefix=$build/autotools-bin
make install

cd $build
curl -OL
tar xzf libtool-2.4.tar.gz
cd libtool-2.4
./configure --prefix=$build/autotools-bin
make install

Notice how I configured each build to install under --prefix=$build/autotools-bin? You can omit that option to install the tools in your system dirs if you want. I usually install what I build under my own user dir to avoid polluting the system dirs, but it's really your choice.

Hope this helps

Monday, May 7, 2012

New Exciting Human Computer Interfaces from Disney and Microsoft

Each year the ACM SIGCHI Conference on Human Factors in Computing Systems unveils exciting advances in Human-Computer Interfaces.

This year Disney Research will present its new Touché interface. Touché turns everyday objects into multi-touch, gesture-recognizing interfaces. It only requires minimal instrumentation of the objects, with just a single small electrode.

Here's a demo:

Technical details are available in their research paper: Touché - Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects.

Microsoft Research will present SoundWave. SoundWave is a Kinect-like system that uses your computer’s built-in microphone and speakers to provide object detection and sense gestures using the Doppler effect, similar to how a submarine uses a sonar.

Here's a demo:

Technical details are available in their research paper: SoundWave - Using the Doppler Effect to Sense Gestures.

Exciting times! I wonder what it would take to get these new technologies working on everybody's smartphone...

Monday, April 23, 2012

Comparing Broadband Internet Service Providers

It looks like AT&T is starting to offer fiber optic Internet service in my area. I've been pretty happy with Comcast Cable Internet, but I thought I'd check it out anyway.

There are many unreliable and biased ISP comparisons out there, but I came across an interesting report from the Federal Communications Commission: "Measuring Broadband America - A Report on Consumer Wireline Broadband Performance in the U.S.".

The report is available there. It compares Internet Service Providers using various criteria, including actual vs advertised performance. If you're shopping for a broadband ISP, take a look and draw your own conclusions...

For now I'm staying with Comcast Cable Internet. I will check again when/if Verizon FiOS ever becomes available here.

Monday, April 9, 2012

The Instagram Architecture - How they're scaling their data storage layer

A brief overview of the Instagram architecture (Instagram was just acquired by Facebook for $1B) is up on the High Scalability blog, based on an earlier post on the Instagram Engineering blog.

It's interesting to see how they're scaling their data storage layer using 12 PostgreSQL databases in a master-replica setup with streaming replication, PostgreSQL schemas for sharding, Skype's PgBouncer to pool database connections, and vmtouch to load disk pages in memory.
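To illustrate the idea of schema-based sharding, here's a toy sketch in Javascript. The shard counts and naming below are my own made-up example, not Instagram's actual numbers or code; the point is that many logical shards (schemas) map onto a few physical databases, so a shard can be moved to another server later without changing application logic:

```javascript
// Toy sketch of logical sharding: many logical shards (e.g. PostgreSQL
// schemas), created up front, pinned onto a few physical databases.
// All the numbers and names here are assumptions for illustration.
var LOGICAL_SHARDS = 4096;   // schemas, created once up front
var PHYSICAL_DBS = 12;       // database servers

// A user's data always lives in the same logical shard...
function logicalShard(userId) {
  return userId % LOGICAL_SHARDS;
}

// ...and each logical shard maps to a physical database. Rebalancing
// means remapping shards to servers, not resharding the data.
function physicalDb(shard) {
  return shard % PHYSICAL_DBS;
}

// The schema name the app would query, e.g. "shard904.photos"
function schemaFor(userId) {
  return 'shard' + logicalShard(userId);
}
```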

Tuesday, March 6, 2012

Pictures From Last Weekend

After my last blog post, a few folks asked me what else I do on weekends besides watching software geek videos :) Yeah, I do that, and I also hack on some open source projects to free my mind from the week's routine. But I just wanted to reassure everybody that I also have a real life, so here are some pictures from last weekend.

Saturday - Squaw Valley
I had initially planned some ski races (slalom on Saturday, GS on Sunday) but preferred to do a family ski day instead. We had a great time: good snow, amazing weather, and stunning views of Lake Tahoe as usual.

After last week's snowfall, Squaw was a little crowded, and I'm glad we drove up early in the morning so we didn't have to park like this:

Sunday morning - Breakfast in the backyard with a friend
Deer come to our backyard all the time, as some of it is wild and not fully fenced. Before moving here I never imagined that Silicon Valley was in the country!

Sunday afternoon - Pacific Ocean, San Gregorio State Beach
Picnic time, relaxing and enjoying the view. Isn't that beautiful? Where else in the world can you ski on Saturday and go to the beach on Sunday?

Simple vs Easy, or How Going Easy Creates Complexity

Last weekend I watched an entertaining presentation of the key differences between Simple and Easy from Rich Hickey (the inventor of Clojure).

I had already watched it last year, and found it enlightening again! In a nutshell, Rich Hickey argues that:

Simple is the opposite of complex and means one fold, one braid, one pure concept or dimension not polluted or interleaved with unnecessary aspects. Simple is an objective measurable quality of the end product, but simplicity, purity and design elegance are hard to achieve.

Easy is the opposite of hard and means near your capabilities, familiar, and requiring no effort or reflection. Easiness is relative and depends on who's building the software, as something easy for you may not be easy for me.

Unfortunately, many software projects choose to go Easy (cheap) vs Simple (hard and requiring to think harder up front). These projects create incidental complexity -- or just a mess -- by using tools and constructs that are not right for the job and drag unnecessary constraints and dimensions into the end product.

Sound familiar?

One of the things I love about open source development is that you can't choose the easy route. If you sacrifice simplicity by choosing the easy route in the open, in public, there'll be no place to hide it. Be assured that someone will get on your project mailing list and comment on the mess you've created, or, even better, make the effort to think harder and show the world a simpler alternative to what you've done.

Could it be one of the reasons why open source wins?

Monday, February 27, 2012

Happy 17th Birthday Apache - Version 2.4 Ideal For Cloud Environments

The Apache Software Foundation celebrates the 17th Anniversary of the Apache HTTP Server with the release of version 2.4.

The Apache HTTP Server is the world's most popular Web server, powering nearly 400 million Web sites across the globe.

I played around with version 2.4 over the weekend. It brings numerous enhancements, making Apache ideally suited for cloud environments, including:
  • lower resource utilization, better concurrency and async I/O support, bringing performance on par with, or better than, pure event-driven Web servers like Nginx;
  • dynamic reverse proxy configuration;
  • more granular timeout and rate/resource limiting capability;
  • more finely-tuned caching support, tailored for high traffic servers and proxies.

More details on the Apache Software Foundation blog and a list of all the new features in the Apache HTTP server documentation.

Thursday, February 16, 2012

Tumblr Architecture - 15 Billion Page Views A Month

Interesting entry on the High Scalability blog describing the Tumblr architecture and how they're scaling to 15 billion pages a month.

A few points caught my eye:
  • Confirmation that MySQL scales just fine with sharding. More details on their sharding implementation here.
    With all the buzz around NoSQL I think people are underestimating good old SQL databases like MySQL or PostgreSQL for example.
  • Confirmation that Redis is just great.
  • Assigning users to Cells helps handle the combinatorial explosion of users x followers x posts.
    To draw an analogy with Ethernet networking, this reminds me of how you can segment a LAN using network switches to reduce bandwidth usage and congestion (as some of the traffic will stay within each cell / segment).

Tuesday, January 31, 2012

We Really Don't Know How To Compute!

One of my new year resolutions was to blog more. It's not working out yet, as I've been too busy the last few weeks. It's already Jan 31 and this is only my second blog entry this year.

I recently came across a fascinating presentation from Gerald Jay Sussman, co-author of the famous MIT computer science text book 'Structure and Interpretation of Computer Programs' and co-inventor of the Scheme programming language.

He claims that we really don't know how to compute, compares computer programs (constrained to rigid designs and difficult to adjust to a new situation) to living organisms (which can be reconfigured to implement new ways to solve a problem) and makes a convincing argument that we need drastically different programming models to approach that level of flexibility.

He then introduces the Propagator Programming Model (work supported in part by the MIT Mind Machine project). A propagator program is built as a network connecting cells and propagators. Cells collect and accumulate information. Propagators are autonomous machines which continuously examine some cells, perform computations on the information from these cells and add the results to other cells.

A propagator program is analogous to an electrical wiring diagram. To extend it and add a new way to approach a problem, you simply add and connect new propagators. Your cells now collect alternate results from different propagators, and you can then decide to merge redundant results, combine partial results, or even exclude contradictory results when some propagators do not work well in a new situation.

This is similar to how human beings resolve problems. We try several approaches, weigh and combine their results, then wire up our brain with the approaches that work well for the next time we face a similar situation.

I couldn't help but see some relation between that propagator model and my recent interests in computer programming models.

Massively Parallel programming
A propagator program is naturally parallel. Each propagator is continually watching its neighbor cells and computing new results as their values change, autonomously and in parallel with other parts of the program.

Functional programming
A propagator is like a pure function that computes results only from its inputs. A result can also be wrapped in a monad to provide information about its premises, relevance or correctness (useful to pick or combine partial results as they accumulate in a cell for example).

Web Component Assembly
The wiring diagram describing a propagator program seems to map really well to an SCA (Service Component Architecture) component assembly wiring diagram. A propagator could easily be realized as a stateless Web component providing a computation service. A cell could be realized as a Web resource accumulating and storing data.

The propagator model also seems like a great candidate to represent programming expressions as networks of connected components, a subject I researched a bit last year, but which would be too long to describe here... perhaps in another blog post.

Anyway, that got me thinking about a fun weekend project. If I find the time, I'd like to do a little hacking and experiment with implementing a propagator program as an assembly of SCA components wired together.

How about defining two new cell and propagator SCA component types, perhaps with REST interfaces to allow propagator programs to live on the Web and play with data from some useful REST services out there?

Wouldn't that be fun?

Wednesday, January 4, 2012

Happy New Year 2012!

Happy New Year 2012 from the Delfino Family!

January 2nd, a great day in Squaw Valley: shopping for the girls, free skiing for my son, slalom gate training for me.

Just got back to work today after a long vacation. Still trying to figure out the year 2012 resolutions.

We wish you all a happy new year 2012!

Friday, November 18, 2011

America's Cup World Series in San Diego - Streaming Live

Don't miss the America's Cup World Series in San Diego this weekend, also streaming live on the YouTube America's Cup channel!

My son was lucky to see them last week as they were already there, and he was in San Diego for the weekend for the Perry Junior Sailing regatta.

Here's a summary of today's semi-finals:

Tomorrow at 1PM PST the finals, Oracle Racing vs the Energy Team from France (Yay!) will be streamed live again.

Wednesday, November 16, 2011

Apache HTTP Server 2.3.15 released!

Back to blogging after a break from it as I was too busy the last few days... with some great news!

The Apache Software Foundation and the Apache HTTP Server Project have announced the release of Version 2.3.15-beta of the Apache HTTP Server (also known as "Apache" or "HTTPD"). This version is the 4th, and likely final, beta release before the general availability of the 2.4 release, with lots of interesting new features, described there.

The following will be particularly useful if you're using HTTPD server to run apps in the cloud:
  • Rate limiting and request timeout control, to protect your server against misbehaving clients;
  • Improvements to the HTTP proxy, in particular with clustering, load balancing, and failover;
  • A better multi-threaded processing module (called the Event MPM) that can handle, with fewer threads, more HTTP connections (typically kept alive and open between requests by Web browsers);
  • Performance improvements, focusing on latency and request / response turnover.

So, with HTTPD 2.3.15 you get a more robust and faster HTTP server with load balancing for horizontal scaling in the cloud! I've been playing with the 2.3.* code for some time, and it has been working really well for me. It'll be great to finally get a 2.4 GA release with all these new cool features! Hopefully soon...

For more info, config examples and performance data, see this presentation from Jim Jagielski.

Saturday, October 29, 2011

Integrating Google Page Speed in your Web site build

Google Page Speed is a great tool to help analyze and optimize the performance of your Web pages. It's actually a family of tools:

- Page Speed Chrome and Firefox browser extensions complement Firebug and analyze the performance of your Web pages right from your Web browser.

- Page Speed Online analyzes the performance of your Web pages too without requiring a browser extension.

- Mod_pagespeed is an Apache HTTPD extension module that optimizes, rewrites and minifies your Web pages on the fly.

- The Page Speed Service is a proxy hosted by Google which optimizes and delivers your Web pages from Google's servers.

- The Page Speed SDK allows you to embed the Page Speed analyzers and optimizers in your own project.

If you're paranoid (don't want your Web site to depend on Google's Page Speed servers) and CPU + memory conscious (don't want to spend resources / $$$ running mod_pagespeed on your HTTP server) you can also run Page Speed on your pages ahead of time when you build your Web site.

It's pretty simple. Here's how I'm doing it on Linux using GNU Automake and Apache HTTPD (I'm sure that can be adapted to other environments):

1. Download and build the Page Speed SDK, like this:
curl -OL
cd page-speed-1.9
make builddir=$build/page-speed-1.9-bin

That'll build two useful command line tools: minify_html_bin, an HTML rewriter / minifier which also minifies inline CSS and Javascript, and jsmin_bin, a Javascript minifier which also works well for CSS.

2. Write the following in your Automake file:
minified = htdocs/index-min.html htdocs/foo-min.js htdocs/bar-min.css

sampledir = ${prefix}
nobase_dist_sample_DATA = ${minified}

SUFFIXES = -min.html -min.js -min.css

.html-min.html:
	../page-speed-1.9-bin/minify_html_bin $< $@

.js-min.js:
	../page-speed-1.9-bin/jsmin_bin < $< > $@

.css-min.css:
	../page-speed-1.9-bin/jsmin_bin < $< > $@

CLEANFILES = ${minified}

I won't go into the Automake specifics now but these rules will run the Page Speed minifiers on your Javascript, CSS and HTML pages as part of your Make build. I'm using a simple naming convention for the minified files, index-min.html is the minified version of index.html.

3. Add a DirectoryIndex directive to your Apache HTTPD httpd.conf config file:
DirectoryIndex index-min.html index.html

That tells HTTPD to serve /index-min.html (the minified page) instead of the original /index.html page when a user browses to your site.

4. Reference foo-min.js and bar-min.css in your HTML pages instead of foo.js or bar.css for example.

5. After making changes to your Web pages, build your site like this:
make

If you had /index.html, foo.js, and bar.css, you should now have /index-min.html, foo-min.js, bar-min.css, all nicely optimized, rewritten and minified by Page Speed.

To clean up the minified files, run:
make clean

That's it. That little trick should normally save you 30% to 50% in bandwidth, CPU, memory and disk space, both on the client devices that access your site (particularly useful with resource-constrained mobile devices) and on your HTTP server (which now serves smaller files).

Hope this helps

Tuesday, October 25, 2011

Recursive Functions of Symbolic Expressions and their Computation by Machine

'Recursive Functions of Symbolic Expressions and their Computation by Machine' is the original paper on Lisp from John McCarthy. The paper appeared in the Communications of the ACM in April 1960. It is a great short read.

John McCarthy passed away yesterday. After Steve Jobs and Dennis Ritchie earlier this month, October has been a sad month for the computing community...

Another good article, 'The Roots of Lisp' from Paul Graham, helps explain John McCarthy's discoveries with Lisp.

If you don't know Lisp, you should learn it.

I had a brief look at it 15 years ago, but was too impatient and inexperienced to get the point then. I took more time to actually learn it on a summer vacation two years ago, then went on to learn Scheme (a Lisp dialect) as I was going through the Structure and Interpretation of Computer Programs, the classic Computer Science book from MIT.

Believe me. Lisp (or Scheme) will transform you into a different programmer. After that, even if you still have to program in C, C++, Python, or even Java, you're in a different world. You'll see programming through different eyes, and there will be no going back.

Think about a few fun facts... Pretty much everything you manipulate in Lisp is a list of values (Lisp stands for 'list processing'). Values can be numbers, strings, symbols or lists. Data and code are interchangeable and represented in the same way, as lists of values. You can write a Lisp interpreter in half a page of Lisp!
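To give a small taste of code-as-data, here's a toy expression evaluator, written in Javascript so more readers can follow it. It's nothing like a real Lisp (no lambda, no quote, just two operators of my own choosing), but it shows how a program can literally be a nested list of values that you could build or transform like any other data:

```javascript
// A tiny evaluator for Lisp-style expressions represented as nested
// Javascript arrays. The program below is an ordinary data structure.
function evaluate(x, env) {
  if (typeof x === 'number') return x;        // a number evaluates to itself
  if (typeof x === 'string') return env[x];   // a symbol names a value in the environment
  // otherwise x is a list: (operator arg1 arg2 ...)
  var op = x[0];
  var args = x.slice(1).map(function(a) { return evaluate(a, env); });
  if (op === '+') return args.reduce(function(a, b) { return a + b; }, 0);
  if (op === '*') return args.reduce(function(a, b) { return a * b; }, 1);
  throw new Error('unknown operator: ' + op);
}

// (* two (+ 1 2)) as a list of values
var program = ['*', 'two', ['+', 1, 2]];
evaluate(program, { two: 2 });   // 6
```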

So, if you're working with 'modern' OO programming languages, scripting languages, or functional programming languages, if you're using XML dialects, JSON, DSLs, and you're still struggling to map between documents, data, objects, services, code and configuration... spend the time to learn Lisp. It'll open your eyes.

Thursday, October 20, 2011

Caching Images in a Mobile Web app

Another post on a technique to help make mobile Web apps work offline.

Last time I showed how to use XMLHttpRequest and local storage to download and cache Javascript scripts. Here I'm going to show how to download and cache an image, and then inject it dynamically into your HTML page.

That technique can be particularly useful to optimize an image intensive app (a game or a photo gallery app for example) and allow it to work offline.

First, place base64-encoded versions of your images (base64 is defined in Internet RFC 3548) on your Web server. On Mac OS X for example, convert foo.png to a base64-encoded foo.b64 file like this:
$ base64 -w 0 foo.png >foo.b64

Configure your Web server to serve .b64 files as text/plain. If you're using the Apache httpd 2.x Web server -- and if you're not using it, you should :) -- add the following to your httpd.conf server configuration file:
AddType text/plain .b64

The reason we convert the images to base64 is that it makes them much easier to use in the HTML5 app pages. We're getting to that part now.

Place the following Javascript script in your HTML page, under the <head> element for example:
<script type="text/javascript">
window.appcache = {};

// Get a resource
window.appcache.get = function(url) {
  var doc = localStorage.getItem(url);
  if (doc != null)
    return doc;
  var http = new XMLHttpRequest();"GET", url, false);
  if (http.status != 200)
    return null;
  localStorage.setItem(url, http.responseText);
  return http.responseText;

// Convert an image to a data: URL
window.appcache.img = function(url) {
  var b64 = window.appcache.get(url);
  return 'data:image/png;base64,' + b64;
</script>

I'm not showing the error handling code to keep it short, but you get the picture. The get(url) function downloads a resource and caches it in local storage. The img(url) function gets an image resource and converts it to a data: URL. A data: URL (as defined in Internet RFC 2397) allows you to include resource data (here our base64-encoded image data) immediately in the URL itself.

Now, later in your HTML page, say you have an <img/> element like this:
<img id="foo"/>

You can set the cached image into it dynamically, like this:
<script type="text/javascript">
var foo = document.getElementById('foo');
foo.src = window.appcache.img('/foo.b64');
</script>

The image src property will recognize the data: URL, the image/png;base64 media type at the beginning of the URL, and read the base64-encoded content as a PNG image.

That's it. With the little trick I've described here you should now be able to:
  • control the download of your images;
  • cache them in local storage for speed and working offline;
  • inject them dynamically into your HTML page as needed.

Hope this helps. In the next few days I'll show how to handle CSS style sheets in a similar way.

Sunday, October 16, 2011

Game Industry Forecast - Rapid Growth and a Changing Market

Interesting study and forecast on the game industry there, as Zynga is preparing for IPO and announced 10 new products this week.

The global game industry will generate $60 billion in revenue in 2011. It's not a surprise, but gaming is the only media business growing right now, with rapid growth driven by mobile games, as games are the leading apps on smartphones and tablets.

The game market seems to be changing rapidly from hardcore gamers (mostly teenage boys) to casual gamers, including older households (35+ years old) and women (42% of gamers now), as browsers and mobile devices bring new populations to casual and social games.

The game technical platforms are also shifting quickly to the Web browser. With technologies like HTML5, Canvas, SVG, and WebGL, Web browsers have become the most convenient platform for games, and it's now clear that games in browsers are the future. See this interview with the founder of Electronic Arts for more insight and his thoughts on what's happening.

Clearly, that means challenges ahead for the established game companies which have to adapt quickly. That also means great opportunities for new players to come and disrupt the mobile game market!!

Friday, October 14, 2011

PhoneGap / Apache Callback project accepted in the Apache Incubator

The vote to accept the Apache Callback project in the Apache Software Foundation Incubator was open for the last 72 hours, and that vote just passed earlier today.

Apache Callback is the free open source software evolution of the popular PhoneGap project.

Apache Callback is a platform for building native mobile applications using HTML, CSS and JavaScript. It allows Web developers to natively target Apple iOS, Google Android, RIM BlackBerry, Microsoft Windows Phone 7, HP webOS, Nokia Symbian and Samsung Bada with a single codebase. The Callback APIs are based on open Web standards and enable HTML5 apps to access native device capabilities such as the camera, compass, accelerometer, microphone and address book.

That's really great news for the mobile app developer community! ... and I'll be watching this project very closely in the next few months!

Dennis Ritchie: The Shoulders Steve Jobs Stood On

Dennis Ritchie: The Shoulders Steve Jobs Stood On.

Another great loss. Dennis Ritchie was the father of the C Programming Language and the UNIX operating system, which pretty much the entire Internet and computer industry run on.

I'm writing this on a Macbook Air running Mac OS X, a UNIX-based system, and programs mostly written in C.

His book "The C Programming Language" was my first Computer Science book. My first serious encounter with software was a 2-week job to teach the C language to a team of programmers. I was just trying to make enough money to go on vacation, but then I got hooked and decided that I wanted to be a C coder... Feeling sad today.

#include <stdio.h>

main() {
  printf("goodbye, world\n");
}

Thank You Sir. RIP.

Caching Javascript in a Mobile Web app

Earlier this week I promised I'd post a useful technique to reference, pre-fetch, and cache Javascript scripts, as a follow-up to my review of an article on HTML5Rocks describing HTML5 techniques for optimizing mobile Web performance.

So, here it goes.

The HTML5Rocks article showed how to follow all the links in a page and pre-fetch and cache their target pages in local storage. That was a good starting point, but for a mobile Web app to really work well offline you also need to fetch and cache the referenced JavaScripts, CSS, and images.

Here's how I do that in my apps.

I place the following utility Javascript at the top of my page under the <head> element:
<script type="text/javascript">
window.appcache = {};

// Get a resource
window.appcache.get = function(url) {
  var doc = localStorage.getItem(url);
  if (doc != null)
    return doc;
  var http = new XMLHttpRequest();"GET", url, false);
  if (http.status != 200)
    return null;
  localStorage.setItem(url, http.responseText);
  return http.responseText;

// Load a script
window.appcache.script = function(url, parent) {
  var e = document.createElement('script');
  e.type = 'text/javascript';
  e.text = window.appcache.get(url);
  parent.appendChild(e);
</script>

I'm not showing the error handling code here to keep it short, but you get the picture. The get(url) function downloads a resource and caches it, and the script(url, parent) function gets a resource and creates a script element with it.

Then to reference a Javascript script later in my page, instead of writing the usual:
<script type="text/javascript" src="foo.js"></script>

I write this:
<script type="text/javascript">
window.appcache.script('foo.js', document.head);
</script>

If you're loading pages in an <iframe> nested inside a main page as suggested in the HTML5Rocks article, you don't need to repeat the utility script in all your pages. Just have it once in the main page, then in the nested pages just write:
<script type="text/javascript">
window.parent.appcache.script('foo.js', document.head);
</script>

Or just set the appcache property on your <iframe> like this:
var nested = document.createElement('iframe');
nested.contentWindow.appcache = window.appcache;

Then you can write the same code everywhere:
<script type="text/javascript">
window.appcache.script('foo.js', document.head);
</script>

Hope this helps.

To the folks who've pinged me several times this week asking for this tip: Sorry for the delay but I've been kinda busy... I'll show how to fetch and cache CSS (similar to Javascripts) and images (using a data: url and base64-encoded image content) in the next few posts, probably this weekend.

Sunday, October 9, 2011

Offline Mobile Web Apps - Really...

A few days ago I posted a review of an article on HTML5Rocks describing HTML5 techniques for optimizing mobile Web performance, in particular a technique to pre-fetch and cache HTML pages and resources, useful to improve navigation performance and allow a mobile Web app to work offline.

In that review I said:
"What HTML5Rocks doesn't describe is how to fetch and cache all the resources referenced by these pages, like CSS style sheets, JavaScript scripts, or images, and that's a lot more work..."

and concluded:
"My 2c... The techniques described in that HTML5Rocks post are fine to get you started, but implementing them is not so simple. As usual, the devil is in the details."

Since then a number of folks have pinged me to challenge and ask:
"So, how do you fetch the referenced resources? and what are these details?"

Fair question :) I'm going to address it in a series of posts in the next few days. I'll also describe a few additional issues and tricks to make a mobile Web app really work offline, and some solutions with code snippets.

Here's a draft outline of the next few posts:
  • Referencing, fetching and caching Javascript scripts
  • Caching CSS stylesheets
  • Caching images
  • HTML5 W3C application cache vs a DIY cache
  • Monitoring your network connection

Stay tuned...

Friday, October 7, 2011

Google Summer of Code 2011 T-shirt - Thanks Google

Received my Google Summer of Code 2011 T-shirt gift from Google today, as I was a mentor for GSoC @ Apache this year again.

Hoping my students have received theirs too (that may take a little longer as they're in Sri Lanka).

I hope you guys had a lot of fun with your coding projects this summer. I enjoyed working with you! Keep up the good work, and look for opportunities to work on and learn from open source again!

Silicon Valley Code Camp 2011 - By Developers, for Developers

Silicon Valley Code Camp 2011 is this weekend in Los Altos Hills.

The camp is free to attend, run by volunteers, with lots of great speakers and interesting sessions. It's also a great opportunity to network with fellow developers.

Here's my session selection:

HTML5 Game Programming - WebGL Edition
Build a mobile web app with Sencha Touch
Faster Mobile Anyone?
Kids Programming Workshop with Scratch
Hands on jQuery Mobile

AutoMobile: the Next Hot Platform for Mobile Linux
Create a Kinect Powered Personal Robot with Microsoft Robotics Developer Studio
The Ecosystem of Context (exploiting user context to go beyond search)
GPU Accelerated Databases using OpenCL

Thursday, October 6, 2011

HTML5 rocks the mobile Web

Great post about HTML5 techniques for mobile on 'HTML5Rocks'. Not sure I agree with everything it says though.

It describes three HTML5 techniques for optimizing mobile Web performance:
  1. For smooth native-feeling sliding and flipping page transitions, use a translate3d(0,0,0) CSS transform to force the phone's Graphics Processing Unit to kick in and perform hardware-accelerated page compositing.

    3D comes at a price though... Hardware acceleration can quickly drain your phone's battery, and some fonts won't look as nice when composited in 3D on the iPhone for example, so you better choose them carefully.

  2. Manually fetch HTML pages and cache them in HTML5 local storage to speed up page navigations and enable the app to work offline (as you have all your pages stored locally).

    What HTML5Rocks doesn't describe is how to fetch and cache all the resources referenced by these pages, like CSS style sheets, JavaScript scripts, or images, and that's a lot more work...

  3. Listen to the network online/offline events and detect the connection type (Ethernet, Wifi, 3G, Edge) using the navigator.connection.type property.

    Well, the online/offline events are useful, but navigator.connection.type is not supported on the iPhone (only on Android), and you can't rely on it anyway, as an overloaded public Wifi can sometimes be slower than 3G... What you really need is a measure of the quality of the end-to-end network connection, in terms of bandwidth and latency, and you can get that by instrumenting your usage of XMLHttpRequest in your client code as well as your server code.

My 2c... The techniques described in that HTML5Rocks post are fine to get you started, but implementing them is not so simple. As usual, the devil is in the details.

Wednesday, October 5, 2011

Stay Hungry. Stay Foolish.

'Stay Hungry. Stay Foolish.' And I have always wished that for myself. - Steve Jobs


Remembering Steve Jobs

Apple's Board of Directors - We are deeply saddened to announce that Steve Jobs passed away today.

Steve's brilliance, passion and energy were the source of countless innovations that enrich and improve all of our lives. The world is immeasurably better because of Steve.

His greatest love was for his wife, Laurene, and his family. Our hearts go out to them and to all who were touched by his extraordinary gifts.

Snow Report - 8 inches at el. 8200

8 inches of fresh snow at elevation 8200 feet today.

Counting down 49 days to opening day, Nov 23.

1st Snow Storm of the Season!

Wednesday - Winter Warning - 100% chance of snow in Squaw Valley & Alpine Meadows - accumulation 3 to 7 inches possible.

The ski season is just a few weeks away. I'm expecting some of my new ski gear to be delivered on Thursday. Hoping to try it soon!

Winter Warning forecast details below, from my NOAA favorite site.

The postings on this site are my own and don’t necessarily represent positions, strategies or opinions of my employer IBM.