This is an archival copy of my PhD blog, which was active between 2009–2015. I'm publishing it again as a personal time capsule, but also because I think it's an interesting documentation of the PhD process itself, which might be useful to someone, somewhere. – Chris Marmo, January 2026

WWDC’11

It’s been a bit of a black hole on this blog recently – the promise of posting “essays” every few weeks scared me away from posting anything at all. I’ve been writing, but nothing is ready for public consumption just yet. There will be writing published, but not for a few more weeks.

Over the past month I’ve been busy collecting data for my first case study, and a few days ago I arrived in San Francisco for WWDC, the Apple developer conference. I’m working on designs for a qualitative analysis tool for iOS devices, which I hope to build some interactive prototypes with while I’m here. The conference has a UI Lab with Apple designers, who critique your designs, and a few hands-on technical sessions around maps, visualisations, etc, which I’m hoping to apply to some prototypes.

This trip is the first of a few over the next few months – in July I’m presenting a poster and participating in a workshop on Geovisualisation at the ICA Conference in Paris, and in late August I’m heading to MobileHCI to participate in the doctoral consortium. “Excited” is an understatement!

So, I hope you can forgive the lack of content here recently. Normal programming will resume soon, now that I have this essay-monkey off my back.

Context: awareness vs sensitivity

I’ve been doing a lot of reading and writing around context awareness the past couple of months – so much so that I changed the subtitle of this site to include it. It’s safe to say that the notion of this kind of awareness completely captured my imagination, or at the very least, led me to line up a whole stack of journal articles and books on the topic.

With the plethora of location-based applications appearing on various mobile platforms, the ubiquitous nature of geo-tagged data and the popular media’s seemingly undying thirst for the latest tech innovation, location enjoyed a pretty good ride in 2010. Starting with location as a focus of research (as I did), it’s not long before you realise that a coordinate is just one piece of metadata that can describe context, and it seems like a natural progression to begin thinking about broader notions of the term.

The next thing you realise after reading all about the current attempts at context-awareness is that, well, they fail to be all that useful.

There are many very intelligent systems-based frameworks for building an architecture of sensors that can detect where you are and what you’re doing, and very detailed examples of software implementations that aim to interpret this sensor data to assist their users. It’s not that these frameworks and implementations are poor or under-thought, it’s simply that the technology isn’t there yet, and our expectations are too high.

Great Expectations

This isn’t our fault though – the term “aware” is loaded with expectation. It immediately conjures notions of Asimov-type robots that basically act and understand as we do – of computational uber-humans superior to us in every way – and ones that we will either grow to love or fear completely.

The problem is hinted at above – in the interpretation. Whilst we might have sensors that can pinpoint you on a map, know who you’re with, whether you’re talking or not, walking or not, whether you’re standing, sitting, or lying down, the problem lies in the translation of this sensory information into meaningful and accurate interpretations for software to use.

The optimist and sci-fi fan in me thinks that, one day, we will see a convergence of sensor technology and artificial intelligence that will provide useful scenarios to people. You might argue this happens already – a pilot’s cockpit springs to mind. But the fact remains that the detection of meaningful, dynamic and social context is a long way off.

Context is socially constructed

I’m working on a longer article on this at the moment, so I won’t go into too much detail. It is worth noting however, that whilst the cockpit of a plane is a highly controlled environment where all variables are known, much of what we would define as context is socially constructed. That is, its existence is fleeting, and only arises out of interaction between people, objects and the environment.

Whilst we may be able to detect your location fairly accurately, the context of your presence there is very difficult to detect. Test this next time you’re in a cafe – note all the different activities that are taking place there. The animated conversations, the quiet reading, the anxious waiting, the scurrying (or bored?) staff. For each of these actors, the place holds a completely different meaning – and hence, a different context.

Context Sensitivity

So if we can’t rely on technology to sense and interpret that kind of context, then what can we do? Well, I’m not sure I have any answers to this, but I would suggest that we first lower the expectations of and burden on our technology. When compared to “awareness”, a word like “sensitivity” seems much more realistic. We can’t do Bicentennial Man just yet, but what we can do is make intelligent assumptions about when, where and how our technology might be used, and we can selectively use sensed data to inform the design of our applications.

That is, I believe it is the role of design to augment the technology – instead of relying on technology to give us context awareness, we should rely on design to give us context sensitivity.

The missing network

One of the challenges of my project, from a technical perspective, is the non-urban environment it is situated in. Whilst it’s well and good to say that the solution will be context-aware and mobile, the reality is that the infrastructure that we take for granted in cities simply does not exist in a national park that is 3 hours from a major urban centre.

In one sense this is what (I hope) makes my research unique – we can’t gorge ourselves on infrastructure. 3G networks are sparse, if not unreliable, and WiFi is a pipe dream. GPS does exist and can be counted on, but when the core of the project is around the sharing and creation of knowledge, ideally in real-time, having an accurate reading of a ranger’s location is nice but not enough. There’s data involved, and quite possibly large quantities.

So – how do we facilitate this kind of knowledge ecosystem when the tubes are narrow, or don’t exist at all?

Well, the first thing a good researcher does is to peek over someone else’s shoulder. Although they may seem like polar opposites, the most similar environment I can think of that resembles a national park in terms of infrastructure is – wait for it – a plane.

This may change in the coming years (months?), but as it stands, planes have almost the same characteristics as a park: sparse network access with absolutely no data connection, and GPS that works – but only if you’re even allowed to use your location-aware device at all.

Offline data storage is the obvious answer. WindowSeat App is an iOS application that stores offline data about points of interest you may be hurtling over at a given point in time. Its data set is fairly finite, so this model works well – however, in an app that is perceived as, and by necessity is, disconnected, how much of a barrier will there be to people wanting to contribute back to that data pool? Can we rely on people to sync, or should we be bold enough to make that decision for them?
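One way to make that decision for people is a store-and-forward model: capture contributions locally regardless of connectivity, and opportunistically push them whenever a connection appears. The sketch below illustrates the idea in Python rather than iOS code; the `SyncQueue` class, its method names, and the callback-based connectivity check are all hypothetical, not drawn from any real app.

```python
class SyncQueue:
    """A minimal store-and-forward sketch: field observations are
    recorded offline and flushed to a server when connectivity exists.
    Hypothetical illustration only, not a real app's architecture."""

    def __init__(self):
        self.pending = []  # observations captured but not yet uploaded

    def record(self, note, lat, lon):
        # Always capture locally first, regardless of connectivity.
        self.pending.append({"note": note, "lat": lat, "lon": lon})

    def flush(self, is_online, upload):
        # Opportunistic sync: if a connection exists, push everything,
        # making the "when to sync" decision on the user's behalf.
        # Returns the number of observations uploaded.
        if not is_online():
            return 0
        sent = 0
        while self.pending:
            upload(self.pending.pop(0))
            sent += 1
        return sent
```

The point of the design is that the user never has to think about the network: `record` succeeds in the middle of a national park, and `flush` can run silently whenever the device wanders back into 3G range.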

The above picture is a phone tower disguised as a tree in Masai Mara National Park, Kenya